Test Report: Docker_macOS 15565

0deb2b878fee68d58aa080d8e3381e2f3cf3cac2:2023-01-28:27629

Failed tests (16/306)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 config get cpus: exit status 14 (66.050955ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config get cpus
functional_test.go:1203: expected config error for "out/minikube-darwin-amd64 -p functional-000000 config get cpus" to be -""- but got *"E0128 10:28:46.233242    5683 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 0d8e36d4-304d-4dc6-a15e-f1634c5170ce"*
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 config get cpus: exit status 14 (63.436745ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- FAIL: TestFunctional/parallel/ConfigCmd (0.56s)

TestIngressAddonLegacy/StartLegacyK8sCluster (255.13s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-390000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0128 10:31:48.026650    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:33:47.292440    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.297562    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.309646    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.329782    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.370647    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.452871    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.613368    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:47.935517    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:48.576621    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:49.858851    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:52.421157    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:33:57.542122    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:34:04.180906    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:34:07.784321    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:34:28.292308    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:34:31.866387    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-390000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m15.101472494s)

-- stdout --
	* [ingress-addon-legacy-390000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-390000 in cluster ingress-addon-legacy-390000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	
-- /stdout --
** stderr ** 
	I0128 10:30:49.838419    6798 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:30:49.838606    6798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:30:49.838611    6798 out.go:309] Setting ErrFile to fd 2...
	I0128 10:30:49.838615    6798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:30:49.838740    6798 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:30:49.839355    6798 out.go:303] Setting JSON to false
	I0128 10:30:49.858368    6798 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1824,"bootTime":1674928825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:30:49.858453    6798 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:30:49.880927    6798 out.go:177] * [ingress-addon-legacy-390000] minikube v1.29.0 on Darwin 13.2
	I0128 10:30:49.924633    6798 notify.go:220] Checking for updates...
	I0128 10:30:49.946565    6798 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:30:49.967528    6798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:30:49.988903    6798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:30:50.010937    6798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:30:50.032771    6798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 10:30:50.055026    6798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:30:50.077192    6798 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:30:50.139119    6798 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:30:50.139250    6798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:30:50.281950    6798 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 18:30:50.189107224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:30:50.303882    6798 out.go:177] * Using the docker driver based on user configuration
	I0128 10:30:50.325931    6798 start.go:296] selected driver: docker
	I0128 10:30:50.325983    6798 start.go:857] validating driver "docker" against <nil>
	I0128 10:30:50.326002    6798 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:30:50.329914    6798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:30:50.471430    6798 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 18:30:50.379622425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:30:50.471531    6798 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 10:30:50.471710    6798 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 10:30:50.493818    6798 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 10:30:50.515689    6798 cni.go:84] Creating CNI manager for ""
	I0128 10:30:50.515727    6798 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:30:50.515808    6798 start_flags.go:319] config:
	{Name:ingress-addon-legacy-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-390000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:30:50.565101    6798 out.go:177] * Starting control plane node ingress-addon-legacy-390000 in cluster ingress-addon-legacy-390000
	I0128 10:30:50.586381    6798 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:30:50.608232    6798 out.go:177] * Pulling base image ...
	I0128 10:30:50.650195    6798 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:30:50.650246    6798 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:30:50.701187    6798 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0128 10:30:50.701208    6798 cache.go:57] Caching tarball of preloaded images
	I0128 10:30:50.701409    6798 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:30:50.723071    6798 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0128 10:30:50.765108    6798 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:30:50.767715    6798 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 10:30:50.767751    6798 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 10:30:50.844267    6798 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0128 10:30:54.586017    6798 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:30:54.586214    6798 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:30:55.207485    6798 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0128 10:30:55.207804    6798 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/config.json ...
	I0128 10:30:55.207829    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/config.json: {Name:mk53783bbff2c354fc44ad628a7638fb2ff341b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:30:55.208149    6798 cache.go:193] Successfully downloaded all kic artifacts
	I0128 10:30:55.208176    6798 start.go:364] acquiring machines lock for ingress-addon-legacy-390000: {Name:mkb8fd11d22ece38045135a1b6a3000e53b24b93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 10:30:55.208307    6798 start.go:368] acquired machines lock for "ingress-addon-legacy-390000" in 123.625µs
	I0128 10:30:55.208329    6798 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-390000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 10:30:55.208381    6798 start.go:125] createHost starting for "" (driver="docker")
	I0128 10:30:55.229770    6798 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0128 10:30:55.230092    6798 start.go:159] libmachine.API.Create for "ingress-addon-legacy-390000" (driver="docker")
	I0128 10:30:55.230168    6798 client.go:168] LocalClient.Create starting
	I0128 10:30:55.230379    6798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem
	I0128 10:30:55.230472    6798 main.go:141] libmachine: Decoding PEM data...
	I0128 10:30:55.230502    6798 main.go:141] libmachine: Parsing certificate...
	I0128 10:30:55.230604    6798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem
	I0128 10:30:55.230670    6798 main.go:141] libmachine: Decoding PEM data...
	I0128 10:30:55.230688    6798 main.go:141] libmachine: Parsing certificate...
	I0128 10:30:55.252295    6798 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-390000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 10:30:55.310000    6798 cli_runner.go:211] docker network inspect ingress-addon-legacy-390000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 10:30:55.310119    6798 network_create.go:281] running [docker network inspect ingress-addon-legacy-390000] to gather additional debugging logs...
	I0128 10:30:55.310138    6798 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-390000
	W0128 10:30:55.363718    6798 cli_runner.go:211] docker network inspect ingress-addon-legacy-390000 returned with exit code 1
	I0128 10:30:55.363752    6798 network_create.go:284] error running [docker network inspect ingress-addon-legacy-390000]: docker network inspect ingress-addon-legacy-390000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-390000
	I0128 10:30:55.363768    6798 network_create.go:286] output of [docker network inspect ingress-addon-legacy-390000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-390000
	
	** /stderr **
	I0128 10:30:55.363865    6798 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 10:30:55.420728    6798 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005e46a0}
	I0128 10:30:55.420761    6798 network_create.go:123] attempt to create docker network ingress-addon-legacy-390000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0128 10:30:55.420833    6798 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-390000 ingress-addon-legacy-390000
	I0128 10:30:55.507582    6798 network_create.go:107] docker network ingress-addon-legacy-390000 192.168.49.0/24 created
	I0128 10:30:55.507618    6798 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-390000" container
	I0128 10:30:55.507737    6798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 10:30:55.562514    6798 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-390000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-390000 --label created_by.minikube.sigs.k8s.io=true
	I0128 10:30:55.618190    6798 oci.go:103] Successfully created a docker volume ingress-addon-legacy-390000
	I0128 10:30:55.618335    6798 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-390000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-390000 --entrypoint /usr/bin/test -v ingress-addon-legacy-390000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 10:30:56.074582    6798 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-390000
	I0128 10:30:56.074624    6798 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:30:56.074640    6798 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 10:30:56.074751    6798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-390000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 10:31:02.131899    6798 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-390000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (6.057120872s)
	I0128 10:31:02.131919    6798 kic.go:199] duration metric: took 6.057312 seconds to extract preloaded images to volume
	I0128 10:31:02.132039    6798 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 10:31:02.279184    6798 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-390000 --name ingress-addon-legacy-390000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-390000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-390000 --network ingress-addon-legacy-390000 --ip 192.168.49.2 --volume ingress-addon-legacy-390000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 10:31:02.638074    6798 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Running}}
	I0128 10:31:02.700385    6798 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Status}}
	I0128 10:31:02.842890    6798 cli_runner.go:164] Run: docker exec ingress-addon-legacy-390000 stat /var/lib/dpkg/alternatives/iptables
	I0128 10:31:02.952562    6798 oci.go:144] the created container "ingress-addon-legacy-390000" has a running status.
	I0128 10:31:02.952591    6798 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa...
	I0128 10:31:03.078445    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0128 10:31:03.078524    6798 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 10:31:03.184925    6798 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Status}}
	I0128 10:31:03.244144    6798 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 10:31:03.244163    6798 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-390000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 10:31:03.345869    6798 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Status}}
	I0128 10:31:03.404249    6798 machine.go:88] provisioning docker machine ...
	I0128 10:31:03.404295    6798 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-390000"
	I0128 10:31:03.404400    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:03.463162    6798 main.go:141] libmachine: Using SSH client type: native
	I0128 10:31:03.463370    6798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50669 <nil> <nil>}
	I0128 10:31:03.463387    6798 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-390000 && echo "ingress-addon-legacy-390000" | sudo tee /etc/hostname
	I0128 10:31:03.608358    6798 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-390000
	
	I0128 10:31:03.608447    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:03.667609    6798 main.go:141] libmachine: Using SSH client type: native
	I0128 10:31:03.667802    6798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50669 <nil> <nil>}
	I0128 10:31:03.667820    6798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-390000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-390000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-390000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 10:31:03.802925    6798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 10:31:03.802948    6798 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 10:31:03.802967    6798 ubuntu.go:177] setting up certificates
	I0128 10:31:03.802975    6798 provision.go:83] configureAuth start
	I0128 10:31:03.803060    6798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-390000
	I0128 10:31:03.861471    6798 provision.go:138] copyHostCerts
	I0128 10:31:03.861519    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 10:31:03.861577    6798 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 10:31:03.861586    6798 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 10:31:03.861700    6798 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 10:31:03.861873    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 10:31:03.861913    6798 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 10:31:03.861917    6798 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 10:31:03.861988    6798 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 10:31:03.862134    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 10:31:03.862170    6798 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 10:31:03.862175    6798 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 10:31:03.862245    6798 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 10:31:03.862379    6798 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-390000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-390000]
	I0128 10:31:04.068548    6798 provision.go:172] copyRemoteCerts
	I0128 10:31:04.068603    6798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 10:31:04.068658    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:04.127995    6798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:31:04.224430    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0128 10:31:04.224538    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 10:31:04.242749    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0128 10:31:04.242845    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0128 10:31:04.260726    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0128 10:31:04.260811    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 10:31:04.279232    6798 provision.go:86] duration metric: configureAuth took 476.246881ms
	I0128 10:31:04.279246    6798 ubuntu.go:193] setting minikube options for container-runtime
	I0128 10:31:04.279415    6798 config.go:180] Loaded profile config "ingress-addon-legacy-390000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:31:04.279484    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:04.338740    6798 main.go:141] libmachine: Using SSH client type: native
	I0128 10:31:04.338894    6798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50669 <nil> <nil>}
	I0128 10:31:04.338909    6798 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 10:31:04.476538    6798 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 10:31:04.476558    6798 ubuntu.go:71] root file system type: overlay
	I0128 10:31:04.476704    6798 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 10:31:04.476792    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:04.536918    6798 main.go:141] libmachine: Using SSH client type: native
	I0128 10:31:04.537085    6798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50669 <nil> <nil>}
	I0128 10:31:04.537135    6798 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 10:31:04.679997    6798 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 10:31:04.680113    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:04.740151    6798 main.go:141] libmachine: Using SSH client type: native
	I0128 10:31:04.740333    6798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50669 <nil> <nil>}
	I0128 10:31:04.740353    6798 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 10:31:05.361373    6798 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:31:04.677527899 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 10:31:05.361409    6798 machine.go:91] provisioned docker machine in 1.957150663s
	I0128 10:31:05.361435    6798 client.go:171] LocalClient.Create took 10.131314045s
	I0128 10:31:05.361457    6798 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-390000" took 10.131419264s
	I0128 10:31:05.361465    6798 start.go:300] post-start starting for "ingress-addon-legacy-390000" (driver="docker")
	I0128 10:31:05.361471    6798 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 10:31:05.361591    6798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 10:31:05.361660    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:05.426279    6798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:31:05.523428    6798 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 10:31:05.527045    6798 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 10:31:05.527066    6798 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 10:31:05.527073    6798 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 10:31:05.527078    6798 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 10:31:05.527090    6798 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 10:31:05.527193    6798 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 10:31:05.527374    6798 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 10:31:05.527381    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> /etc/ssl/certs/38492.pem
	I0128 10:31:05.527576    6798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 10:31:05.534986    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 10:31:05.552529    6798 start.go:303] post-start completed in 191.055878ms
	I0128 10:31:05.553094    6798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-390000
	I0128 10:31:05.611624    6798 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/config.json ...
	I0128 10:31:05.612053    6798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 10:31:05.612107    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:05.671323    6798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:31:05.764054    6798 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 10:31:05.768694    6798 start.go:128] duration metric: createHost completed in 10.56036033s
	I0128 10:31:05.768716    6798 start.go:83] releasing machines lock for "ingress-addon-legacy-390000", held for 10.5604597s
	I0128 10:31:05.768813    6798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-390000
	I0128 10:31:05.827546    6798 ssh_runner.go:195] Run: cat /version.json
	I0128 10:31:05.827581    6798 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 10:31:05.827617    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:05.827658    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:05.890422    6798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:31:05.890585    6798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:31:05.981829    6798 ssh_runner.go:195] Run: systemctl --version
	I0128 10:31:06.192250    6798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 10:31:06.197699    6798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 10:31:06.217638    6798 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 10:31:06.217710    6798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 10:31:06.231574    6798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 10:31:06.239487    6798 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
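For reference, the find/sed pipeline above rewrites any bridge CNI config in place: IPv6 "dst"/"subnet" entries are deleted and the pod subnet is pinned to 10.244.0.0/16. A sketch of what the patched /etc/cni/net.d/100-crio-bridge.conf plausibly looks like afterwards (field layout assumed from CRI-O's stock bridge config; this run does not dump the file):

    # /etc/cni/net.d/100-crio-bridge.conf -- sketch, contents assumed
    {
      "cniVersion": "0.3.1",
      "name": "crio",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
      }
    }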
	I0128 10:31:06.239517    6798 start.go:483] detecting cgroup driver to use...
	I0128 10:31:06.239549    6798 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:31:06.239704    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:31:06.253209    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0128 10:31:06.261828    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 10:31:06.270306    6798 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 10:31:06.270360    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 10:31:06.279197    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:31:06.287806    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 10:31:06.296202    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:31:06.304938    6798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 10:31:06.313244    6798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 10:31:06.322788    6798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 10:31:06.331139    6798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 10:31:06.338484    6798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:31:06.405866    6798 ssh_runner.go:195] Run: sudo systemctl restart containerd
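Taken together, the sed edits above amount to the following containerd settings before the daemon restart; a sketch of the affected /etc/containerd/config.toml keys, assuming containerd's standard CRI plugin layout (the file itself is not dumped in this run):

    # /etc/containerd/config.toml -- sketch, layout assumed
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.2"
      restrict_oom_score_adj = false

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false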
	I0128 10:31:06.484692    6798 start.go:483] detecting cgroup driver to use...
	I0128 10:31:06.484732    6798 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:31:06.484837    6798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 10:31:06.496521    6798 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 10:31:06.496596    6798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 10:31:06.506769    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:31:06.521959    6798 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 10:31:06.593642    6798 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 10:31:06.690575    6798 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 10:31:06.690592    6798 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 10:31:06.704997    6798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:31:06.803694    6798 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 10:31:07.006945    6798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 10:31:07.036272    6798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
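The 144-byte in-memory asset copied to /etc/docker/daemon.json above is what switches dockerd to the cgroupfs driver before the restart. Its contents are not logged; a sketch of what minikube typically writes there (assumed, not taken from this run):

    # /etc/docker/daemon.json -- sketch, contents assumed
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }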
	I0128 10:31:07.088496    6798 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	I0128 10:31:07.088761    6798 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-390000 dig +short host.docker.internal
	I0128 10:31:07.202396    6798 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 10:31:07.202537    6798 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 10:31:07.207037    6798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
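The grep/echo pipeline above is an atomic upsert: it filters out any existing host.minikube.internal entry, appends a fresh one, and copies the temp file back over /etc/hosts. Given the host IP dug up above, the node's /etc/hosts gains:

    # appended to /etc/hosts inside the node container
    192.168.65.2	host.minikube.internal

The same pattern is reused below to add control-plane.minikube.internal (192.168.49.2).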
	I0128 10:31:07.216972    6798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:31:07.276592    6798 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:31:07.276677    6798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:31:07.300750    6798 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0128 10:31:07.300768    6798 docker.go:560] Images already preloaded, skipping extraction
	I0128 10:31:07.300859    6798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:31:07.325461    6798 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0128 10:31:07.325486    6798 cache_images.go:84] Images are preloaded, skipping loading
	I0128 10:31:07.325575    6798 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 10:31:07.395848    6798 cni.go:84] Creating CNI manager for ""
	I0128 10:31:07.395865    6798 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:31:07.395877    6798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 10:31:07.395892    6798 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-390000 NodeName:ingress-addon-legacy-390000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 10:31:07.396026    6798 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-390000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 10:31:07.396115    6798 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-390000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-390000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 10:31:07.396189    6798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0128 10:31:07.404416    6798 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 10:31:07.404554    6798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 10:31:07.411983    6798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0128 10:31:07.424945    6798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0128 10:31:07.437678    6798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
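The 354-byte asset written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf above corresponds to the unit text printed by kubeadm.go:968 earlier; reconstructed as a sketch (exact whitespace and byte layout assumed):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -- reconstructed sketch
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-390000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]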
	I0128 10:31:07.450606    6798 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0128 10:31:07.454315    6798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 10:31:07.464049    6798 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000 for IP: 192.168.49.2
	I0128 10:31:07.464067    6798 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:07.464261    6798 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 10:31:07.464399    6798 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 10:31:07.464491    6798 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.key
	I0128 10:31:07.464524    6798 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.crt with IP's: []
	I0128 10:31:07.631661    6798 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.crt ...
	I0128 10:31:07.631672    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.crt: {Name:mk10e770b02beb4677d20c2824426ba09da1bf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:07.631977    6798 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.key ...
	I0128 10:31:07.631985    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/client.key: {Name:mkcf0ccd05046d329912ce6a70ed9a3154f3a5ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:07.632197    6798 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key.dd3b5fb2
	I0128 10:31:07.632226    6798 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 10:31:07.748984    6798 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt.dd3b5fb2 ...
	I0128 10:31:07.748998    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt.dd3b5fb2: {Name:mkf150d20aba88d13dfe2da12666c042914355f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:07.771367    6798 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key.dd3b5fb2 ...
	I0128 10:31:07.771397    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key.dd3b5fb2: {Name:mke09741ab1d6debc6b0e9cc73fc3be81e0d31c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:07.793604    6798 certs.go:333] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt
	I0128 10:31:07.814907    6798 certs.go:337] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key
	I0128 10:31:07.815308    6798 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.key
	I0128 10:31:07.815343    6798 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.crt with IP's: []
	I0128 10:31:08.203976    6798 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.crt ...
	I0128 10:31:08.203990    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.crt: {Name:mkaf1696f8ea6fba787464ddb110f2f3f23505a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:08.204292    6798 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.key ...
	I0128 10:31:08.204300    6798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.key: {Name:mk63cd0fb3c41e4a340f8cf13ad531c0d6842cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:31:08.204483    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0128 10:31:08.204513    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0128 10:31:08.204540    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0128 10:31:08.204567    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0128 10:31:08.204588    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0128 10:31:08.204607    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0128 10:31:08.204626    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0128 10:31:08.204647    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0128 10:31:08.204755    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 10:31:08.204804    6798 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 10:31:08.204830    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 10:31:08.204869    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 10:31:08.204908    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 10:31:08.204943    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 10:31:08.205017    6798 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 10:31:08.205048    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem -> /usr/share/ca-certificates/3849.pem
	I0128 10:31:08.205069    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> /usr/share/ca-certificates/38492.pem
	I0128 10:31:08.205093    6798 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:31:08.205593    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 10:31:08.225024    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 10:31:08.242476    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 10:31:08.260272    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/ingress-addon-legacy-390000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 10:31:08.277757    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 10:31:08.295178    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 10:31:08.312780    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 10:31:08.330714    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 10:31:08.348237    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 10:31:08.365913    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 10:31:08.383663    6798 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 10:31:08.401277    6798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 10:31:08.414637    6798 ssh_runner.go:195] Run: openssl version
	I0128 10:31:08.420943    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 10:31:08.429516    6798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 10:31:08.433597    6798 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 10:31:08.433644    6798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 10:31:08.439397    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 10:31:08.447816    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 10:31:08.456135    6798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 10:31:08.460410    6798 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 10:31:08.460468    6798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 10:31:08.465973    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 10:31:08.474262    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 10:31:08.483232    6798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:31:08.487224    6798 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:31:08.487264    6798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:31:08.492718    6798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
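The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is exactly what the preceding `openssl x509 -hash -noout` runs compute; for example:

    # run inside the node container; b5213941 matches the symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints: b5213941  ->  TLS loaders resolve /etc/ssl/certs/b5213941.0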
	I0128 10:31:08.500685    6798 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-390000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:31:08.500796    6798 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 10:31:08.523686    6798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 10:31:08.531801    6798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 10:31:08.539916    6798 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 10:31:08.539964    6798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 10:31:08.547450    6798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 10:31:08.547477    6798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 10:31:08.597376    6798 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0128 10:31:08.597453    6798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 10:31:08.896181    6798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 10:31:08.896264    6798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 10:31:08.896420    6798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 10:31:09.123124    6798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 10:31:09.123677    6798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 10:31:09.123719    6798 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 10:31:09.196858    6798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 10:31:09.239249    6798 out.go:204]   - Generating certificates and keys ...
	I0128 10:31:09.239329    6798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 10:31:09.239396    6798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 10:31:09.287969    6798 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 10:31:09.426813    6798 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 10:31:09.578113    6798 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 10:31:09.670389    6798 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 10:31:09.763663    6798 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 10:31:09.763792    6798 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-390000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0128 10:31:09.951596    6798 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 10:31:09.951717    6798 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-390000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0128 10:31:10.018407    6798 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 10:31:10.153983    6798 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 10:31:10.211991    6798 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 10:31:10.212070    6798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 10:31:10.310007    6798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 10:31:10.373545    6798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 10:31:10.514420    6798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 10:31:10.601265    6798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 10:31:10.601920    6798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 10:31:10.623709    6798 out.go:204]   - Booting up control plane ...
	I0128 10:31:10.623820    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 10:31:10.623905    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 10:31:10.623990    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 10:31:10.624099    6798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 10:31:10.624266    6798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 10:31:50.611021    6798 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 10:31:50.612484    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:31:50.612710    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:31:55.614180    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:31:55.614417    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:32:05.615418    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:32:05.615669    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:32:25.617780    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:32:25.618007    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:33:05.619708    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:33:05.619976    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:33:05.619998    6798 kubeadm.go:322] 
	I0128 10:33:05.620095    6798 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0128 10:33:05.620158    6798 kubeadm.go:322] 		timed out waiting for the condition
	I0128 10:33:05.620173    6798 kubeadm.go:322] 
	I0128 10:33:05.620244    6798 kubeadm.go:322] 	This error is likely caused by:
	I0128 10:33:05.620321    6798 kubeadm.go:322] 		- The kubelet is not running
	I0128 10:33:05.620486    6798 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 10:33:05.620496    6798 kubeadm.go:322] 
	I0128 10:33:05.620599    6798 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 10:33:05.620651    6798 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0128 10:33:05.620691    6798 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0128 10:33:05.620699    6798 kubeadm.go:322] 
	I0128 10:33:05.620859    6798 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 10:33:05.620958    6798 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0128 10:33:05.620969    6798 kubeadm.go:322] 
	I0128 10:33:05.621046    6798 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0128 10:33:05.621115    6798 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0128 10:33:05.621218    6798 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0128 10:33:05.621247    6798 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0128 10:33:05.621253    6798 kubeadm.go:322] 
	I0128 10:33:05.624436    6798 kubeadm.go:322] W0128 18:31:08.596729    1163 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0128 10:33:05.624566    6798 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 10:33:05.624622    6798 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 10:33:05.624731    6798 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0128 10:33:05.624838    6798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 10:33:05.624945    6798 kubeadm.go:322] W0128 18:31:10.605741    1163 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:33:05.625035    6798 kubeadm.go:322] W0128 18:31:10.606767    1163 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:33:05.625101    6798 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 10:33:05.625163    6798 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
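Because the "node" here is itself a Docker container, kubeadm's suggested troubleshooting commands have to be wrapped in docker exec from the macOS host; a sketch using the container name from this run:

    docker exec ingress-addon-legacy-390000 systemctl status kubelet --no-pager
    docker exec ingress-addon-legacy-390000 journalctl -xeu kubelet --no-pager
    # inner dockerd: list control-plane containers as kubeadm suggests
    docker exec ingress-addon-legacy-390000 sh -c "docker ps -a | grep kube | grep -v pause"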
	W0128 10:33:05.625352    6798 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-390000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-390000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:31:08.596729    1163 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:31:10.605741    1163 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:31:10.606767    1163 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 10:33:05.625385    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 10:33:06.041226    6798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 10:33:06.050862    6798 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 10:33:06.050922    6798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 10:33:06.058479    6798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 10:33:06.058496    6798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 10:33:06.106359    6798 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0128 10:33:06.106401    6798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 10:33:06.400048    6798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 10:33:06.400139    6798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 10:33:06.400247    6798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 10:33:06.622551    6798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 10:33:06.623045    6798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 10:33:06.623115    6798 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 10:33:06.696160    6798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 10:33:06.717641    6798 out.go:204]   - Generating certificates and keys ...
	I0128 10:33:06.717708    6798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 10:33:06.717787    6798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 10:33:06.717877    6798 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 10:33:06.717928    6798 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 10:33:06.717982    6798 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 10:33:06.718065    6798 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 10:33:06.718128    6798 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 10:33:06.718211    6798 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 10:33:06.718290    6798 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 10:33:06.718338    6798 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 10:33:06.718386    6798 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 10:33:06.718446    6798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 10:33:06.903323    6798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 10:33:06.984680    6798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 10:33:07.099735    6798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 10:33:07.282505    6798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 10:33:07.283111    6798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 10:33:07.304955    6798 out.go:204]   - Booting up control plane ...
	I0128 10:33:07.305127    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 10:33:07.305296    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 10:33:07.305436    6798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 10:33:07.305572    6798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 10:33:07.305859    6798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 10:33:47.292439    6798 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 10:33:47.293514    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:33:47.293737    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:33:52.295223    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:33:52.295444    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:34:02.296052    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:34:02.296218    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:34:22.298001    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:34:22.298220    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:35:02.299557    6798 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:35:02.299792    6798 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:35:02.299807    6798 kubeadm.go:322] 
	I0128 10:35:02.299849    6798 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0128 10:35:02.299913    6798 kubeadm.go:322] 		timed out waiting for the condition
	I0128 10:35:02.299927    6798 kubeadm.go:322] 
	I0128 10:35:02.299968    6798 kubeadm.go:322] 	This error is likely caused by:
	I0128 10:35:02.300010    6798 kubeadm.go:322] 		- The kubelet is not running
	I0128 10:35:02.300108    6798 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 10:35:02.300115    6798 kubeadm.go:322] 
	I0128 10:35:02.300269    6798 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 10:35:02.300335    6798 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0128 10:35:02.300371    6798 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0128 10:35:02.300378    6798 kubeadm.go:322] 
	I0128 10:35:02.300469    6798 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 10:35:02.300564    6798 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0128 10:35:02.300580    6798 kubeadm.go:322] 
	I0128 10:35:02.300684    6798 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0128 10:35:02.300750    6798 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0128 10:35:02.300843    6798 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0128 10:35:02.300887    6798 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0128 10:35:02.300919    6798 kubeadm.go:322] 
	I0128 10:35:02.303334    6798 kubeadm.go:322] W0128 18:33:06.105604    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0128 10:35:02.303484    6798 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 10:35:02.303553    6798 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 10:35:02.303647    6798 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0128 10:35:02.303742    6798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 10:35:02.303854    6798 kubeadm.go:322] W0128 18:33:07.288195    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:35:02.303953    6798 kubeadm.go:322] W0128 18:33:07.288967    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:35:02.304025    6798 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 10:35:02.304089    6798 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 10:35:02.304123    6798 kubeadm.go:403] StartCluster complete in 3m53.80473634s
	I0128 10:35:02.304215    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 10:35:02.327738    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.327751    6798 logs.go:281] No container was found matching "kube-apiserver"
	I0128 10:35:02.327820    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 10:35:02.350477    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.350490    6798 logs.go:281] No container was found matching "etcd"
	I0128 10:35:02.350567    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 10:35:02.374026    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.374041    6798 logs.go:281] No container was found matching "coredns"
	I0128 10:35:02.374116    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 10:35:02.396833    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.396847    6798 logs.go:281] No container was found matching "kube-scheduler"
	I0128 10:35:02.396915    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 10:35:02.419135    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.419151    6798 logs.go:281] No container was found matching "kube-proxy"
	I0128 10:35:02.419224    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 10:35:02.443742    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.443756    6798 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 10:35:02.443825    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 10:35:02.466974    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.466987    6798 logs.go:281] No container was found matching "storage-provisioner"
	I0128 10:35:02.467054    6798 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 10:35:02.490059    6798 logs.go:279] 0 containers: []
	W0128 10:35:02.490073    6798 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 10:35:02.490080    6798 logs.go:124] Gathering logs for dmesg ...
	I0128 10:35:02.490086    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 10:35:02.502270    6798 logs.go:124] Gathering logs for describe nodes ...
	I0128 10:35:02.502282    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 10:35:02.556232    6798 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 10:35:02.556245    6798 logs.go:124] Gathering logs for Docker ...
	I0128 10:35:02.556252    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 10:35:02.572825    6798 logs.go:124] Gathering logs for container status ...
	I0128 10:35:02.572839    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 10:35:04.625514    6798 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05267043s)
	I0128 10:35:04.625644    6798 logs.go:124] Gathering logs for kubelet ...
	I0128 10:35:04.625651    6798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0128 10:35:04.663825    6798 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:33:06.105604    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:33:07.288195    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:33:07.288967    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 10:35:04.663909    6798 out.go:239] * 
	W0128 10:35:04.664173    6798 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:33:06.105604    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:33:07.288195    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:33:07.288967    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 10:35:04.664189    6798 out.go:239] * 
	W0128 10:35:04.665179    6798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:35:04.731571    6798 out.go:177] 
	W0128 10:35:04.774037    6798 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:33:06.105604    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:33:07.288195    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:33:07.288967    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 10:35:04.774187    6798 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 10:35:04.774288    6798 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 10:35:04.817509    6798 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-390000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (255.13s)
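The kubeadm output captured above already names the triage path for a kubelet that never answers its health check. Collected into a runnable sequence it looks like the sketch below; every command is quoted from the log except 'minikube ssh' (a standard minikube subcommand, used here to reach the node container), the profile name is taken from this run, and CONTAINERID stays a placeholder:

	# Enter the node container for this profile
	minikube ssh -p ingress-addon-legacy-390000
	# Is the kubelet service running, and if not, why?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# Did any control-plane container start and then crash?
	docker ps -a | grep kube | grep -v pause
	# Inspect a failing container's logs (substitute the real ID)
	docker logs CONTAINERID
	# The endpoint kubeadm polls; in this run it was refused every time
	curl -sSL http://localhost:10248/healthz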
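The preflight warnings also point at the most plausible root cause: the node's Docker reports the 'cgroupfs' cgroup driver while kubeadm recommends 'systemd', and the suggestion line in the log gives the kubelet-side override. Two ways to align the drivers are sketched below, on the assumption that the driver mismatch is in fact what kept the kubelet down in this run; option 2 uses Docker's standard exec-opts daemon.json key, which does not appear anywhere in this log:

	# Option 1 (the log's own suggestion): point the kubelet at systemd
	minikube start -p ingress-addon-legacy-390000 --kubernetes-version=v1.18.20 \
	  --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Option 2: switch Docker inside the node to the systemd driver, then restart it
	echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker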
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.63s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-390000 addons enable ingress --alsologtostderr -v=5
E0128 10:35:09.253519    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 10:36:31.176583    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-390000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.148290084s)
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	
-- /stdout --
** stderr ** 
	I0128 10:35:04.979155    7132 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:35:04.979406    7132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:35:04.979411    7132 out.go:309] Setting ErrFile to fd 2...
	I0128 10:35:04.979419    7132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:35:04.979532    7132 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:35:05.001586    7132 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0128 10:35:05.023477    7132 config.go:180] Loaded profile config "ingress-addon-legacy-390000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:35:05.023495    7132 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-390000"
	I0128 10:35:05.023505    7132 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-390000"
	I0128 10:35:05.023803    7132 host.go:66] Checking if "ingress-addon-legacy-390000" exists ...
	I0128 10:35:05.024340    7132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Status}}
	I0128 10:35:05.104873    7132 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0128 10:35:05.125873    7132 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0128 10:35:05.147148    7132 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0128 10:35:05.170965    7132 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0128 10:35:05.192217    7132 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0128 10:35:05.192257    7132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0128 10:35:05.192411    7132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:35:05.250861    7132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:35:05.351787    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:05.403765    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:05.403788    7132 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:05.680218    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:05.732978    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:05.732993    7132 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:06.274006    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:06.326701    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:06.326717    7132 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:06.983241    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:07.035861    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:07.035882    7132 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:07.829438    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:07.885504    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:07.885520    7132 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:09.056123    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:09.109095    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:09.109118    7132 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:11.363772    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:11.416502    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:11.416517    7132 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:13.028161    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:13.081902    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:13.081917    7132 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:15.886876    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:15.941259    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:15.941284    7132 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:19.767514    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:19.821926    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:19.821941    7132 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:27.519840    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:27.573350    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:27.573365    7132 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:42.209755    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:35:42.263150    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:35:42.263165    7132 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:10.674229    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:36:10.727850    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:10.727866    7132 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:33.897580    7132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:36:33.951190    7132 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:33.951219    7132 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-390000"
	I0128 10:36:33.972976    7132 out.go:177] * Verifying ingress addon...
	I0128 10:36:33.996452    7132 out.go:177] 
	W0128 10:36:34.018348    7132 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-390000" does not exist: client config: context "ingress-addon-legacy-390000" does not exist]
	W0128 10:36:34.018379    7132 out.go:239] * 
	W0128 10:36:34.022204    7132 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:36:34.043946    7132 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
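[editor's note] The retry.go lines in the stderr above show the backoff loop behind this failure: each failed kubectl apply schedules another attempt after a growing delay until the addon enabler gives up. Below is a minimal, self-contained Go sketch of that pattern; the delay schedule and the kubectl command are illustrative assumptions, not minikube's actual pkg/util/retry code.

// retrysketch.go: retry a flaky command with a growing delay between
// attempts, giving up once a total time budget is spent — the pattern
// visible in the "will retry after ..." lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil runs fn repeatedly, doubling the sleep between attempts,
// until fn succeeds or the budget is exhausted.
func retryUntil(budget time.Duration, fn func() error) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(budget)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait, roughly like the log above
	}
}

func main() {
	// Hypothetical stand-in for the kubectl apply the addon installer
	// retries; any command that can fail transiently works here.
	err := retryUntil(90*time.Second, func() error {
		return exec.Command("kubectl", "apply", "-f",
			"/etc/kubernetes/addons/ingress-deploy.yaml").Run()
	})
	if err != nil {
		fmt.Println(err)
	}
}

Note that with the apiserver down, no amount of retrying helps — every attempt above fails with the same connection-refused error until the 90-second budget runs out.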
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-390000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-390000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68",
	        "Created": "2023-01-28T18:31:02.334358637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:31:02.631020465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hostname",
	        "HostsPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hosts",
	        "LogPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68-json.log",
	        "Name": "/ingress-addon-legacy-390000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-390000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-390000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-390000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-390000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-390000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5fe5ca6418fcb71d5296a3fbcafe6821774a3f6ecbab38a1750a603124c5f6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50672"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5fe5ca6418f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-390000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "81ecd54bea96",
	                        "ingress-addon-legacy-390000"
	                    ],
	                    "NetworkID": "aca9235cbadf195019f05c183dc8328253b454a42b2f6907ed2109dc7827e5c0",
	                    "EndpointID": "037337bfd639098883c1a28ae8d85ea279aaf209e0ca0ed3103f9bab1526b047",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
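[editor's note] The post-mortem dumps the container's full docker inspect JSON, but when only a single field matters the harness pulls it directly with a Go template (see the cli_runner lines elsewhere in this report, e.g. --format={{.State.Status}}). A small Go sketch of the same technique, assuming the docker CLI is on PATH; both templates below appear verbatim in this report's logs.

// inspectsketch.go: extract single fields from `docker inspect` with Go
// templates instead of parsing the full JSON dump above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker container inspect --format <tmpl> <container>`
// and returns the trimmed output.
func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "ingress-addon-legacy-390000"
	status, _ := inspectField(name, "{{.State.Status}}")
	sshPort, _ := inspectField(name,
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	// For the container above this prints: status: running ssh port: 50669
	fmt.Println("status:", status, "ssh port:", sshPort)
}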
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000: exit status 6 (420.021308ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:36:34.540795    7217 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-390000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-390000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.63s)
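[editor's note] Exit status 6 above traces back to the kubeconfig check in status.go: the profile's context was never written to the kubeconfig, so endpoint extraction fails even though the container itself is running. A sketch of that check using k8s.io/client-go; the file path is this CI host's, and the check is an approximation of minikube's logic, not its actual code.

// ctxsketch.go: does a named context exist in a kubeconfig file?
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/Users/jenkins/minikube-integration/15565-2556/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["ingress-addon-legacy-390000"]; !ok {
		// This is the condition minikube reports as `"..." does not
		// appear in <kubeconfig>`; `minikube update-context` rewrites
		// the entry to repair it.
		fmt.Println(`context "ingress-addon-legacy-390000" does not exist`)
	}
}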

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.54s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-390000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-390000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.075338777s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 10:36:34.606929    7229 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:36:34.607264    7229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:36:34.607270    7229 out.go:309] Setting ErrFile to fd 2...
	I0128 10:36:34.607274    7229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:36:34.607385    7229 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:36:34.629849    7229 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0128 10:36:34.651962    7229 config.go:180] Loaded profile config "ingress-addon-legacy-390000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:36:34.651999    7229 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-390000"
	I0128 10:36:34.652017    7229 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-390000"
	I0128 10:36:34.652517    7229 host.go:66] Checking if "ingress-addon-legacy-390000" exists ...
	I0128 10:36:34.653389    7229 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-390000 --format={{.State.Status}}
	I0128 10:36:34.734492    7229 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0128 10:36:34.756439    7229 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0128 10:36:34.784273    7229 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0128 10:36:34.784314    7229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0128 10:36:34.784461    7229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-390000
	I0128 10:36:34.841222    7229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50669 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa Username:docker}
	I0128 10:36:34.939036    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:34.991102    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:34.991125    7229 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:35.267711    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:35.321216    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:35.321232    7229 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:35.863818    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:35.918327    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:35.918346    7229 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:36.575279    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:36.628634    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:36.628650    7229 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:37.422172    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:37.475927    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:37.475941    7229 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:38.646581    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:38.698629    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:38.698648    7229 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:40.951951    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:41.004316    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:41.004333    7229 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:42.617320    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:42.671780    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:42.671800    7229 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:45.476377    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:45.527749    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:45.527764    7229 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:49.355028    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:49.408474    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:49.408496    7229 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:57.107769    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:36:57.162684    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:36:57.162700    7229 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:37:11.799188    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:37:11.851635    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:37:11.851650    7229 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:37:40.259122    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:37:40.312318    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:37:40.312333    7229 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:38:03.480571    7229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:38:03.533464    7229 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:38:03.555452    7229 out.go:177] 
	W0128 10:38:03.578184    7229 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0128 10:38:03.578209    7229 out.go:239] * 
	W0128 10:38:03.581971    7229 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:38:03.603172    7229 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
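[editor's note] The enable flow visible in the stderr above is: copy the manifest into the node ("scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml"), then run kubectl apply inside the node over its forwarded SSH port. A hedged Go sketch of the apply step using the ssh CLI; the port, key path, and docker user come from this report's logs, while the host-key option is an illustrative shortcut, not how minikube's sshutil actually connects.

// applysketch.go: run the same in-node kubectl apply the addon enabler
// retries, over the host-forwarded SSH port of the kic container.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Port 50669 is the host side of the node's 22/tcp binding shown in
	// the docker inspect output; the identity file path matches the log.
	apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.18.20/kubectl apply -f " +
		"/etc/kubernetes/addons/ingress-dns-pod.yaml"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-i", "/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/ingress-addon-legacy-390000/id_rsa",
		"-p", "50669", "docker@127.0.0.1", apply)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// With the apiserver down this surfaces the same "connection to
		// the server localhost:8443 was refused" seen throughout the log.
		fmt.Println("apply failed:", err)
	}
}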
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-390000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-390000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68",
	        "Created": "2023-01-28T18:31:02.334358637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:31:02.631020465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hostname",
	        "HostsPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hosts",
	        "LogPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68-json.log",
	        "Name": "/ingress-addon-legacy-390000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-390000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-390000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-390000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-390000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-390000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5fe5ca6418fcb71d5296a3fbcafe6821774a3f6ecbab38a1750a603124c5f6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50672"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5fe5ca6418f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-390000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "81ecd54bea96",
	                        "ingress-addon-legacy-390000"
	                    ],
	                    "NetworkID": "aca9235cbadf195019f05c183dc8328253b454a42b2f6907ed2109dc7827e5c0",
	                    "EndpointID": "037337bfd639098883c1a28ae8d85ea279aaf209e0ca0ed3103f9bab1526b047",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
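The inspect dump above is the raw JSON for the entire container. When triaging, the few fields that matter here can be pulled directly with docker inspect's Go-template formatter instead of scanning the whole blob; a minimal sketch, assuming only the container name shown in this report:

	# State and per-network settings for the kic container
	docker inspect -f '{{.State.Status}}' ingress-addon-legacy-390000
	docker inspect -f '{{json .NetworkSettings.Networks}}' ingress-addon-legacy-390000
	# Host port published for the apiserver port (8443/tcp) inside the container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-390000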
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000: exit status 6 (400.619094ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:38:04.076411    7311 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-390000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-390000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.54s)
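The actual failure signal in this post-mortem is the status error above: the container reports Running, but the profile has no entry in /Users/jenkins/minikube-integration/15565-2556/kubeconfig, so every kubectl-based check short-circuits. A hedged recovery sketch, using only commands the warning itself names (profile name taken from this report):

	# Confirm the context is missing, then let minikube rewrite it and re-check status
	kubectl config get-contexts
	minikube -p ingress-addon-legacy-390000 update-context
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-390000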

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.47s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-390000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-390000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68",
	        "Created": "2023-01-28T18:31:02.334358637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:31:02.631020465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hostname",
	        "HostsPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/hosts",
	        "LogPath": "/var/lib/docker/containers/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68/81ecd54bea96c4081b5455d66e761a35982248668e7e5abc125ae13d54b49f68-json.log",
	        "Name": "/ingress-addon-legacy-390000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-390000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-390000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4345c9efa7c1503b90cfd56fd1212b2a186e90273611ca3ce836ff3cee34a54e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-390000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-390000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-390000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-390000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5fe5ca6418fcb71d5296a3fbcafe6821774a3f6ecbab38a1750a603124c5f6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50672"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5fe5ca6418f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-390000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "81ecd54bea96",
	                        "ingress-addon-legacy-390000"
	                    ],
	                    "NetworkID": "aca9235cbadf195019f05c183dc8328253b454a42b2f6907ed2109dc7827e5c0",
	                    "EndpointID": "037337bfd639098883c1a28ae8d85ea279aaf209e0ca0ed3103f9bab1526b047",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-390000 -n ingress-addon-legacy-390000: exit status 6 (407.178586ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:38:04.542703    7323 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-390000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-390000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.47s)
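This subtest fails in 0.47s at addons_test.go:171 because no Kubernetes client could be built, which is the same kubeconfig gap as above rather than a new container problem. A quick hedged probe that the apiserver is answering at all, assuming the 8443/tcp host mapping (50668) shown in the inspect output still holds:

	# An unauthenticated request should still draw an HTTP response if the apiserver is up
	curl -k https://127.0.0.1:50668/version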

                                                
                                    
TestRunningBinaryUpgrade (67.67s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker : exit status 70 (52.389596285s)

                                                
                                                
-- stdout --
	! [running-upgrade-656000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig641174129
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:57:35.024724896 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-656000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:57:54.853153993 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-656000", then "minikube start -p running-upgrade-656000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (in-place download progress meter; intermediate carriage-return updates collapsed)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:57:54.853153993 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
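The unit diff embedded in this error shows the mechanism minikube's provisioner depends on: a service may carry only one ExecStart= (for anything other than Type=oneshot), so the inherited command must first be cleared with an empty ExecStart= before the replacement is set. A minimal standalone sketch of that pattern as a systemd drop-in, with an illustrative dockerd command line rather than the exact flags from this run:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker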
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker : exit status 70 (4.376968511s)

                                                
                                                
-- stdout --
	* [running-upgrade-656000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1632039667
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-656000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3861396840.exe start -p running-upgrade-656000 --memory=2200 --vm-driver=docker : exit status 70 (4.35024296s)

                                                
                                                
-- stdout --
	* [running-upgrade-656000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1631812467
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-656000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
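Both retries die at the same point: `sudo systemctl start docker` inside the already-running container exits 1, and the log captures only systemd's generic pointer. A hedged follow-up sketch for pulling the daemon's actual failure out of the kic container (container name from this report; plain docker exec, since SSH provisioning is the part that is broken):

	docker exec running-upgrade-656000 systemctl status docker.service --no-pager
	docker exec running-upgrade-656000 journalctl -u docker.service --no-pager -n 50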
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-28 10:58:09.207643 -0800 PST m=+2227.165983944
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-656000
helpers_test.go:235: (dbg) docker inspect running-upgrade-656000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189",
	        "Created": "2023-01-28T18:57:43.380468125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:57:43.621878164Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189/hostname",
	        "HostsPath": "/var/lib/docker/containers/ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189/hosts",
	        "LogPath": "/var/lib/docker/containers/ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189/ceae50541cca71064faba0cbf79334836dd61d86924fda6515a5fad11bdb8189-json.log",
	        "Name": "/running-upgrade-656000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-656000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/78897af5a9ec63c0bb458ef01ecec4ec4d549fb0deccf3aa92f9af5939d622dc-init/diff:/var/lib/docker/overlay2/3a5eb793706dab6a00e3a6337ab8693407ba67ebe159e3a8c2f7a8c0b3340a1f/diff:/var/lib/docker/overlay2/846e0f8f0eea05ce251d0675fc0f7ec6773eaad6dcf2c80a006b06096e91d410/diff:/var/lib/docker/overlay2/9e3cce176eddaf6a58a3d25d8c2736a360dbf4a7f076f8e7c16807ad98e94eec/diff:/var/lib/docker/overlay2/44e6e4a48f2d20d013c13091f07263420d5b4dd98196f93e0773eefc75b8a387/diff:/var/lib/docker/overlay2/92b81764a5a76b852fb8fb3878770999095fda715fb6e430bb2f579507afc632/diff:/var/lib/docker/overlay2/198f800f261adea911ce679a354dbaa9cb62084a71d35918f83611227e44694f/diff:/var/lib/docker/overlay2/783a607a8dc5e07072214da7acc2c6be4c0640502cf72f9a030c5fe065c878d3/diff:/var/lib/docker/overlay2/0d52374ae2c42b9bd2a2aacdb1a3deee761e5ec3d448c06f57de44c308d2793c/diff:/var/lib/docker/overlay2/ab2f10b83aa92e554730a54decc55facffdde82f1ec075d8445adff8b6063de1/diff:/var/lib/docker/overlay2/39f444
4c02e5400a72216b45baa67a66bad9bceb554a579912cc202f17ea8b01/diff:/var/lib/docker/overlay2/5543e7f0f154691a204e607d13c5f862cc3f177dc9a3bc50027ddb6dc5712041/diff:/var/lib/docker/overlay2/afa6ceca0e1983b444bae85682aa4d21531feae3761ee2832679dffbe6ad6acc/diff:/var/lib/docker/overlay2/b5038bb2502f40b48d26d2580fa219f544c6c2768992099b6ab6ef05f93cc05b/diff:/var/lib/docker/overlay2/9b8375a1f55e0d49ada7c6f60d00981de88ae6d71c60d0eb949caf6f1ca98cea/diff:/var/lib/docker/overlay2/21d9f07453ff723a425280089cb459a9c97667f97c5df73916f537833e25360d/diff:/var/lib/docker/overlay2/9b4d5fbdf578ccc75369a75f362f3e38d366badfc69db2069cdec7eee6ebbf26/diff:/var/lib/docker/overlay2/c8db01a6ee6933f0aef59444bd6932612e2cf91965c41d576d1a14bc4c5e0da5/diff:/var/lib/docker/overlay2/fb26580dd02020f332cc077879db60b14a96f2e84768b8715cb9f9af59cc725c/diff:/var/lib/docker/overlay2/b9a63932903cc05817e33921a96e8d52c020a641232546dafcd1c125006d2b64/diff:/var/lib/docker/overlay2/222f3b62658e54bcc1f4e86007bb8e6f6cdcd16279bde733a17effc95a7b24b1/diff:/var/lib/d
ocker/overlay2/286d8f56d4871fa6dfdcc1be4a016db8b231a1cdd1e9bf81d02c1957ed6c21fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78897af5a9ec63c0bb458ef01ecec4ec4d549fb0deccf3aa92f9af5939d622dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78897af5a9ec63c0bb458ef01ecec4ec4d549fb0deccf3aa92f9af5939d622dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78897af5a9ec63c0bb458ef01ecec4ec4d549fb0deccf3aa92f9af5939d622dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-656000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-656000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-656000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-656000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-656000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a4dc649f1c99ec77097f84619d7ae0e05d5992eaaef481132f97277b2583f2f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52737"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52738"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52739"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2a4dc649f1c9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "63601ddc20797b3cea6747eca60daeb4e142bd235b2c913dde9ff980cac067f5",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9bd710a2e2e93a89bc1ff3f9c3069eadb2765518501e82198f38305f5684cab6",
	                    "EndpointID": "63601ddc20797b3cea6747eca60daeb4e142bd235b2c913dde9ff980cac067f5",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-656000 -n running-upgrade-656000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-656000 -n running-upgrade-656000: exit status 6 (390.656608ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:58:09.644612   13939 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-656000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-656000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-656000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-656000: (2.354033125s)
--- FAIL: TestRunningBinaryUpgrade (67.67s)
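For a local reproduction, the first failing attempt already printed the suggested path; restated here as commands (taken verbatim from the log's own hint at the end of that attempt):

	minikube delete -p running-upgrade-656000
	minikube start -p running-upgrade-656000 --alsologtostderr -v=1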

                                                
                                    
TestKubernetesUpgrade (557.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0128 10:59:04.179570    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:59:16.598241    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.604043    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.616117    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.636482    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.676574    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.756711    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:16.917589    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:17.238497    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:17.878663    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:19.159957    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:21.720694    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:26.841182    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 10:59:37.081866    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
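The cert_rotation errors above are client-go's certificate watcher retrying against client.crt files for the addons-869000 and skaffold-449000 profiles, which were presumably torn down earlier in this run; they are noise for TestKubernetesUpgrade itself. A one-line hedged check that the watched paths are indeed gone:

	ls -l /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt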

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.371830979s)

-- stdout --
	* [kubernetes-upgrade-510000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-510000 in cluster kubernetes-upgrade-510000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

-- /stdout --
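Note: the stdout above repeats "Generating certificates and keys ..." / "Booting up control plane ...", which indicates the v1.16.0 bootstrap was attempted more than once before the start command gave up with exit status 109 after roughly 4m10s. A minimal follow-up sketch for pulling more detail out of the failed profile (assuming the profile had not yet been deleted when these commands run):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 logs --problems
	out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50
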
** stderr ** 
	I0128 10:59:03.961002   14306 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:59:03.961177   14306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:59:03.961182   14306 out.go:309] Setting ErrFile to fd 2...
	I0128 10:59:03.961186   14306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:59:03.961315   14306 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:59:03.961858   14306 out.go:303] Setting JSON to false
	I0128 10:59:03.980617   14306 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3518,"bootTime":1674928825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:59:03.980722   14306 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:59:04.002179   14306 out.go:177] * [kubernetes-upgrade-510000] minikube v1.29.0 on Darwin 13.2
	I0128 10:59:04.044023   14306 notify.go:220] Checking for updates...
	I0128 10:59:04.065798   14306 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:59:04.087277   14306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:59:04.109208   14306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:59:04.131180   14306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:59:04.152986   14306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 10:59:04.174210   14306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:59:04.196622   14306 config.go:180] Loaded profile config "cert-expiration-293000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:59:04.196726   14306 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:59:04.258114   14306 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:59:04.258241   14306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:59:04.399963   14306 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 18:59:04.308668619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:59:04.421840   14306 out.go:177] * Using the docker driver based on user configuration
	I0128 10:59:04.443690   14306 start.go:296] selected driver: docker
	I0128 10:59:04.443726   14306 start.go:857] validating driver "docker" against <nil>
	I0128 10:59:04.443754   14306 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:59:04.447634   14306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:59:04.594220   14306 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 18:59:04.502751322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:59:04.594351   14306 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 10:59:04.594524   14306 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 10:59:04.618050   14306 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 10:59:04.637982   14306 cni.go:84] Creating CNI manager for ""
	I0128 10:59:04.638018   14306 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:59:04.638036   14306 start_flags.go:319] config:
	{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:59:04.680955   14306 out.go:177] * Starting control plane node kubernetes-upgrade-510000 in cluster kubernetes-upgrade-510000
	I0128 10:59:04.702038   14306 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:59:04.723000   14306 out.go:177] * Pulling base image ...
	I0128 10:59:04.765224   14306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:59:04.765257   14306 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:59:04.765331   14306 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 10:59:04.765350   14306 cache.go:57] Caching tarball of preloaded images
	I0128 10:59:04.765556   14306 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 10:59:04.765577   14306 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 10:59:04.766562   14306 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/config.json ...
	I0128 10:59:04.766713   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/config.json: {Name:mkf82c5d7875098d485f3af2c773600a86af4d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:04.822918   14306 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 10:59:04.822937   14306 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 10:59:04.822951   14306 cache.go:193] Successfully downloaded all kic artifacts
	I0128 10:59:04.822996   14306 start.go:364] acquiring machines lock for kubernetes-upgrade-510000: {Name:mkfa40a9c66407b1117a3c099684776ebeaaf6f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 10:59:04.823160   14306 start.go:368] acquired machines lock for "kubernetes-upgrade-510000" in 152.272µs
	I0128 10:59:04.823193   14306 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 10:59:04.823254   14306 start.go:125] createHost starting for "" (driver="docker")
	I0128 10:59:04.845035   14306 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 10:59:04.845268   14306 start.go:159] libmachine.API.Create for "kubernetes-upgrade-510000" (driver="docker")
	I0128 10:59:04.845293   14306 client.go:168] LocalClient.Create starting
	I0128 10:59:04.845385   14306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem
	I0128 10:59:04.845429   14306 main.go:141] libmachine: Decoding PEM data...
	I0128 10:59:04.845445   14306 main.go:141] libmachine: Parsing certificate...
	I0128 10:59:04.845508   14306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem
	I0128 10:59:04.845540   14306 main.go:141] libmachine: Decoding PEM data...
	I0128 10:59:04.845548   14306 main.go:141] libmachine: Parsing certificate...
	I0128 10:59:04.845984   14306 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 10:59:04.901189   14306 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 10:59:04.901296   14306 network_create.go:281] running [docker network inspect kubernetes-upgrade-510000] to gather additional debugging logs...
	I0128 10:59:04.901308   14306 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000
	W0128 10:59:04.956503   14306 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 returned with exit code 1
	I0128 10:59:04.956530   14306 network_create.go:284] error running [docker network inspect kubernetes-upgrade-510000]: docker network inspect kubernetes-upgrade-510000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-510000
	I0128 10:59:04.956542   14306 network_create.go:286] output of [docker network inspect kubernetes-upgrade-510000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-510000
	
	** /stderr **
	I0128 10:59:04.956644   14306 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 10:59:05.014126   14306 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 10:59:05.015111   14306 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f52d50}
	I0128 10:59:05.015135   14306 network_create.go:123] attempt to create docker network kubernetes-upgrade-510000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0128 10:59:05.015359   14306 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	W0128 10:59:05.070284   14306 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000 returned with exit code 1
	W0128 10:59:05.070315   14306 network_create.go:148] failed to create docker network kubernetes-upgrade-510000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 10:59:05.070334   14306 network_create.go:115] failed to create docker network kubernetes-upgrade-510000 192.168.58.0/24, will retry: subnet is taken
	I0128 10:59:05.071644   14306 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 10:59:05.071956   14306 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011b9ec0}
	I0128 10:59:05.071967   14306 network_create.go:123] attempt to create docker network kubernetes-upgrade-510000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0128 10:59:05.072036   14306 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	W0128 10:59:05.127331   14306 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000 returned with exit code 1
	W0128 10:59:05.127363   14306 network_create.go:148] failed to create docker network kubernetes-upgrade-510000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 10:59:05.127379   14306 network_create.go:115] failed to create docker network kubernetes-upgrade-510000 192.168.67.0/24, will retry: subnet is taken
	I0128 10:59:05.128733   14306 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 10:59:05.129053   14306 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001114af0}
	I0128 10:59:05.129066   14306 network_create.go:123] attempt to create docker network kubernetes-upgrade-510000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0128 10:59:05.129143   14306 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	I0128 10:59:05.216186   14306 network_create.go:107] docker network kubernetes-upgrade-510000 192.168.76.0/24 created
	I0128 10:59:05.216223   14306 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-510000" container
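
Note: the two "Pool overlaps with other one on this address space" failures above show minikube walking candidate /24 subnets (192.168.58.0/24, then 192.168.67.0/24) that other Docker networks already occupied, before 192.168.76.0/24 succeeded. A quick sketch for listing which subnets existing networks hold on such a host (standard docker CLI flags only; nothing here is taken from this run):

	# print "name: subnet" for every Docker network on the host
	docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)
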
	I0128 10:59:05.216354   14306 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 10:59:05.273138   14306 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-510000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --label created_by.minikube.sigs.k8s.io=true
	I0128 10:59:05.329161   14306 oci.go:103] Successfully created a docker volume kubernetes-upgrade-510000
	I0128 10:59:05.329297   14306 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-510000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --entrypoint /usr/bin/test -v kubernetes-upgrade-510000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 10:59:05.924359   14306 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-510000
	I0128 10:59:05.924390   14306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:59:05.924403   14306 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 10:59:05.924509   14306 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-510000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 10:59:11.730566   14306 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-510000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (5.806026029s)
	I0128 10:59:11.730590   14306 kic.go:199] duration metric: took 5.806242 seconds to extract preloaded images to volume
	I0128 10:59:11.730701   14306 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 10:59:11.873266   14306 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-510000 --name kubernetes-upgrade-510000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --network kubernetes-upgrade-510000 --ip 192.168.76.2 --volume kubernetes-upgrade-510000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 10:59:12.211704   14306 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Running}}
	I0128 10:59:12.276761   14306 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 10:59:12.340457   14306 cli_runner.go:164] Run: docker exec kubernetes-upgrade-510000 stat /var/lib/dpkg/alternatives/iptables
	I0128 10:59:12.456697   14306 oci.go:144] the created container "kubernetes-upgrade-510000" has a running status.
	I0128 10:59:12.456730   14306 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa...
	I0128 10:59:12.523879   14306 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 10:59:12.633452   14306 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 10:59:12.696223   14306 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 10:59:12.696242   14306 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-510000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 10:59:12.864146   14306 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 10:59:12.924296   14306 machine.go:88] provisioning docker machine ...
	I0128 10:59:12.924333   14306 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-510000"
	I0128 10:59:12.924441   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:12.983218   14306 main.go:141] libmachine: Using SSH client type: native
	I0128 10:59:12.983428   14306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52856 <nil> <nil>}
	I0128 10:59:12.983441   14306 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-510000 && echo "kubernetes-upgrade-510000" | sudo tee /etc/hostname
	I0128 10:59:13.126700   14306 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-510000
	
	I0128 10:59:13.126789   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:13.185506   14306 main.go:141] libmachine: Using SSH client type: native
	I0128 10:59:13.185681   14306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52856 <nil> <nil>}
	I0128 10:59:13.185695   14306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-510000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-510000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-510000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 10:59:13.320439   14306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 10:59:13.320463   14306 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 10:59:13.320488   14306 ubuntu.go:177] setting up certificates
	I0128 10:59:13.320496   14306 provision.go:83] configureAuth start
	I0128 10:59:13.320579   14306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-510000
	I0128 10:59:13.379454   14306 provision.go:138] copyHostCerts
	I0128 10:59:13.379552   14306 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 10:59:13.379560   14306 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 10:59:13.379689   14306 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 10:59:13.379892   14306 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 10:59:13.379900   14306 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 10:59:13.379976   14306 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 10:59:13.380136   14306 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 10:59:13.380142   14306 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 10:59:13.380210   14306 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 10:59:13.380330   14306 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-510000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-510000]
	I0128 10:59:13.574735   14306 provision.go:172] copyRemoteCerts
	I0128 10:59:13.574796   14306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 10:59:13.574847   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:13.633442   14306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 10:59:13.725489   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 10:59:13.743233   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 10:59:13.760763   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 10:59:13.778406   14306 provision.go:86] duration metric: configureAuth took 457.900935ms
	I0128 10:59:13.778419   14306 ubuntu.go:193] setting minikube options for container-runtime
	I0128 10:59:13.778565   14306 config.go:180] Loaded profile config "kubernetes-upgrade-510000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 10:59:13.778645   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:13.837595   14306 main.go:141] libmachine: Using SSH client type: native
	I0128 10:59:13.837762   14306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52856 <nil> <nil>}
	I0128 10:59:13.837775   14306 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 10:59:13.970499   14306 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 10:59:13.985328   14306 ubuntu.go:71] root file system type: overlay
	I0128 10:59:13.985516   14306 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 10:59:13.985650   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:14.043275   14306 main.go:141] libmachine: Using SSH client type: native
	I0128 10:59:14.043432   14306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52856 <nil> <nil>}
	I0128 10:59:14.043499   14306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 10:59:14.184679   14306 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 10:59:14.184793   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:14.244707   14306 main.go:141] libmachine: Using SSH client type: native
	I0128 10:59:14.244875   14306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52856 <nil> <nil>}
	I0128 10:59:14.244893   14306 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 10:59:14.889893   14306 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:59:14.182262809 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
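
Note: the diff above applies the standard systemd pattern, also spelled out in the unit's own comments, of clearing an inherited ExecStart= with an empty assignment before redefining it. The same pattern in a minimal drop-in override (hypothetical path and daemon flags, not taken from this run):

	# /etc/systemd/system/docker.service.d/10-execstart.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	# apply with: sudo systemctl daemon-reload && sudo systemctl restart docker
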
	
	I0128 10:59:14.889918   14306 machine.go:91] provisioned docker machine in 1.965620786s
	I0128 10:59:14.889924   14306 client.go:171] LocalClient.Create took 10.044722224s
	I0128 10:59:14.889962   14306 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-510000" took 10.044781451s
	I0128 10:59:14.889973   14306 start.go:300] post-start starting for "kubernetes-upgrade-510000" (driver="docker")
	I0128 10:59:14.889980   14306 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 10:59:14.890047   14306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 10:59:14.890104   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:14.949753   14306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 10:59:15.044654   14306 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 10:59:15.048463   14306 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 10:59:15.048481   14306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 10:59:15.048489   14306 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 10:59:15.048496   14306 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 10:59:15.048507   14306 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 10:59:15.048621   14306 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 10:59:15.048807   14306 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 10:59:15.049009   14306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 10:59:15.056461   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 10:59:15.073632   14306 start.go:303] post-start completed in 183.651598ms
	I0128 10:59:15.074150   14306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-510000
	I0128 10:59:15.132882   14306 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/config.json ...
	I0128 10:59:15.133314   14306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 10:59:15.133417   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:15.191665   14306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 10:59:15.285045   14306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 10:59:15.289661   14306 start.go:128] duration metric: createHost completed in 10.466492998s
	I0128 10:59:15.289683   14306 start.go:83] releasing machines lock for "kubernetes-upgrade-510000", held for 10.466611895s
	I0128 10:59:15.289774   14306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-510000
	I0128 10:59:15.350328   14306 ssh_runner.go:195] Run: cat /version.json
	I0128 10:59:15.350347   14306 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 10:59:15.350399   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:15.350427   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:15.417103   14306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 10:59:15.417396   14306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 10:59:15.509736   14306 ssh_runner.go:195] Run: systemctl --version
	I0128 10:59:15.720044   14306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 10:59:15.725405   14306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 10:59:15.745732   14306 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 10:59:15.745806   14306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 10:59:15.759667   14306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 10:59:15.767386   14306 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0128 10:59:15.767412   14306 start.go:483] detecting cgroup driver to use...
	I0128 10:59:15.767425   14306 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:59:15.767542   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:59:15.780813   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 10:59:15.789576   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 10:59:15.798340   14306 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 10:59:15.798415   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 10:59:15.807530   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:59:15.815846   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 10:59:15.824332   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:59:15.832957   14306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 10:59:15.840912   14306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 10:59:15.849422   14306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 10:59:15.856780   14306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 10:59:15.864139   14306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:59:15.928619   14306 ssh_runner.go:195] Run: sudo systemctl restart containerd
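
After the daemon-reload and restart, the unit's state can be confirmed with the same systemctl probe the test applies to crio below (a sketch, not part of the test flow):

	sudo systemctl is-active containerd
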
	I0128 10:59:16.000821   14306 start.go:483] detecting cgroup driver to use...
	I0128 10:59:16.000841   14306 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:59:16.000913   14306 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 10:59:16.014916   14306 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 10:59:16.014992   14306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 10:59:16.025917   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:59:16.040666   14306 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 10:59:16.104406   14306 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 10:59:16.196849   14306 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 10:59:16.196869   14306 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 10:59:16.210943   14306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:59:16.306214   14306 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 10:59:16.517937   14306 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 10:59:16.549092   14306 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
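
The docker.go:529 step above records 144 bytes being copied to /etc/docker/daemon.json but not the payload itself. A daemon.json that selects the cgroupfs driver, which is what "configuring docker to use cgroupfs" implies, typically looks like this illustrative sketch (not the actual file contents):

	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
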
	I0128 10:59:16.624467   14306 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 10:59:16.624665   14306 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-510000 dig +short host.docker.internal
	I0128 10:59:16.741479   14306 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 10:59:16.741581   14306 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 10:59:16.746054   14306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
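
The one-liner above rewrites /etc/hosts without replacing the file itself: inside a Docker container /etc/hosts is a bind mount, so it must be overwritten in place with cp rather than renamed over, which is what sed -i would attempt. The same pattern with a hypothetical host entry:

	# Filter out any stale line for the name, append the fresh mapping,
	# then cp over the bind-mounted file in place.
	{ grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.1	example.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
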
	I0128 10:59:16.756167   14306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 10:59:16.815979   14306 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:59:16.816052   14306 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:59:16.840836   14306 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 10:59:16.840856   14306 docker.go:560] Images already preloaded, skipping extraction
	I0128 10:59:16.840927   14306 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:59:16.864974   14306 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 10:59:16.864993   14306 cache_images.go:84] Images are preloaded, skipping loading
	I0128 10:59:16.865074   14306 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 10:59:16.938201   14306 cni.go:84] Creating CNI manager for ""
	I0128 10:59:16.938220   14306 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:59:16.938237   14306 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 10:59:16.938255   14306 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-510000 NodeName:kubernetes-upgrade-510000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 10:59:16.938385   14306 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-510000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-510000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
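
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what later lands on the node as /var/tmp/minikube/kubeadm.yaml. Assuming this kubeadm build supports the flag, the rendered file could be sanity-checked without mutating the node:

	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run
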
	
	I0128 10:59:16.938466   14306 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-510000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
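
In the kubelet drop-in above, the empty ExecStart= line is systemd's idiom for clearing the base unit's command before the override defines its own. The merged result can be inspected the same way the test inspects docker.service:

	sudo systemctl cat kubelet
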
	I0128 10:59:16.938538   14306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 10:59:16.946979   14306 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 10:59:16.947035   14306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 10:59:16.954611   14306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0128 10:59:16.968181   14306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 10:59:16.981570   14306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0128 10:59:16.994965   14306 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 10:59:16.999675   14306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 10:59:17.009753   14306 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000 for IP: 192.168.76.2
	I0128 10:59:17.009773   14306 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.009957   14306 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 10:59:17.010026   14306 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 10:59:17.010066   14306 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key
	I0128 10:59:17.010081   14306 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt with IP's: []
	I0128 10:59:17.243370   14306 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt ...
	I0128 10:59:17.243389   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt: {Name:mk9a6346c3da4d63f2aad9c25d4a7c4d1ed7bef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.243691   14306 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key ...
	I0128 10:59:17.243699   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key: {Name:mkc820a52e21ad2e8b2e2eff8d5577763f239036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.243897   14306 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key.31bdca25
	I0128 10:59:17.243911   14306 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 10:59:17.301028   14306 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt.31bdca25 ...
	I0128 10:59:17.301038   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt.31bdca25: {Name:mk4633e1c54fc698fe05e12618abcb5e000c75e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.301323   14306 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key.31bdca25 ...
	I0128 10:59:17.301331   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key.31bdca25: {Name:mk2fe8ae7938d2033d066a01e102dd56e5270099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.301517   14306 certs.go:333] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt
	I0128 10:59:17.301681   14306 certs.go:337] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key
	I0128 10:59:17.301837   14306 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key
	I0128 10:59:17.301853   14306 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.crt with IP's: []
	I0128 10:59:17.414069   14306 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.crt ...
	I0128 10:59:17.414080   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.crt: {Name:mk48f1de0fbd75cb0e60d5df5adc83cafe525e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:59:17.414300   14306 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key ...
	I0128 10:59:17.414307   14306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key: {Name:mkc3ddc9cfe61a3d5f0a3841501dfd0012725a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
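
The apiserver certificate generated above is signed for the SAN IPs 192.168.76.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1. A sketch for reading those back out of the written file:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
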
	I0128 10:59:17.414679   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 10:59:17.414728   14306 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 10:59:17.414739   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 10:59:17.414770   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 10:59:17.414804   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 10:59:17.414835   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 10:59:17.414903   14306 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 10:59:17.415397   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 10:59:17.434926   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 10:59:17.452481   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 10:59:17.469980   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 10:59:17.487731   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 10:59:17.504869   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 10:59:17.522157   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 10:59:17.539439   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 10:59:17.556786   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 10:59:17.576264   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 10:59:17.594427   14306 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 10:59:17.612413   14306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 10:59:17.625465   14306 ssh_runner.go:195] Run: openssl version
	I0128 10:59:17.631778   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 10:59:17.640773   14306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 10:59:17.644808   14306 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 10:59:17.644860   14306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 10:59:17.650645   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 10:59:17.659533   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 10:59:17.669735   14306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:59:17.674426   14306 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:59:17.674474   14306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:59:17.679994   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 10:59:17.688208   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 10:59:17.696314   14306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 10:59:17.700534   14306 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 10:59:17.700588   14306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 10:59:17.706152   14306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
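
The <hash>.0 link names above follow OpenSSL's subject-hash convention: openssl x509 -hash prints the subject-name hash (b5213941 for minikubeCA.pem here), and a symlink by that name under /etc/ssl/certs is how OpenSSL locates the CA during chain verification. The three link steps generalize to:

	# Derive the subject hash, then publish the cert under the <hash>.0 name.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
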
	I0128 10:59:17.714440   14306 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:59:17.714546   14306 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 10:59:17.736630   14306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 10:59:17.744702   14306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 10:59:17.752581   14306 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 10:59:17.752638   14306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 10:59:17.760361   14306 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 10:59:17.760385   14306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 10:59:17.808276   14306 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 10:59:17.808886   14306 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 10:59:18.111893   14306 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 10:59:18.111990   14306 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 10:59:18.112096   14306 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 10:59:18.339313   14306 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 10:59:18.340100   14306 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 10:59:18.346581   14306 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 10:59:18.414983   14306 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 10:59:18.457254   14306 out.go:204]   - Generating certificates and keys ...
	I0128 10:59:18.457365   14306 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 10:59:18.457487   14306 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 10:59:18.570823   14306 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 10:59:18.660554   14306 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 10:59:18.916419   14306 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 10:59:19.024348   14306 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 10:59:19.081746   14306 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 10:59:19.082334   14306 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-510000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 10:59:19.188685   14306 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 10:59:19.188791   14306 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-510000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 10:59:19.311766   14306 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 10:59:19.470974   14306 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 10:59:19.616946   14306 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 10:59:19.617003   14306 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 10:59:19.722459   14306 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 10:59:19.769564   14306 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 10:59:19.916650   14306 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 10:59:20.087889   14306 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 10:59:20.088599   14306 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 10:59:20.110509   14306 out.go:204]   - Booting up control plane ...
	I0128 10:59:20.110711   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 10:59:20.110840   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 10:59:20.110960   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 10:59:20.111095   14306 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 10:59:20.111333   14306 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:00:00.097271   14306 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:00:00.097950   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:00:00.098253   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:00:05.099814   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:00:05.100039   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:00:15.101309   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:00:15.101544   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:00:35.101625   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:00:35.101858   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:01:15.102117   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:01:15.102280   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:01:15.102303   14306 kubeadm.go:322] 
	I0128 11:01:15.102349   14306 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:01:15.102399   14306 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:01:15.102421   14306 kubeadm.go:322] 
	I0128 11:01:15.102466   14306 kubeadm.go:322] This error is likely caused by:
	I0128 11:01:15.102502   14306 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:01:15.102576   14306 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:01:15.102581   14306 kubeadm.go:322] 
	I0128 11:01:15.102668   14306 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:01:15.102728   14306 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:01:15.102756   14306 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:01:15.102764   14306 kubeadm.go:322] 
	I0128 11:01:15.102855   14306 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:01:15.102936   14306 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:01:15.103011   14306 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:01:15.103050   14306 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:01:15.103108   14306 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:01:15.103131   14306 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:01:15.105741   14306 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:01:15.105814   14306 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:01:15.105950   14306 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:01:15.106038   14306 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:01:15.106106   14306 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:01:15.106170   14306 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
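
The diagnostics kubeadm suggests above, collected in directly runnable form, plus a consistency check between Docker's cgroup driver and the kubelet config written earlier (both should report cgroupfs here):

	systemctl status kubelet
	journalctl -xeu kubelet
	docker ps -a | grep kube | grep -v pause
	docker info --format '{{.CgroupDriver}}'
	grep cgroupDriver /var/lib/kubelet/config.yaml
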
	W0128 11:01:15.106317   14306 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-510000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-510000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 11:01:15.106345   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:01:15.527463   14306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:01:15.538890   14306 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:01:15.538968   14306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:01:15.548039   14306 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:01:15.548086   14306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:01:15.600598   14306 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:01:15.600659   14306 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:01:15.933816   14306 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:01:15.933916   14306 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:01:15.934005   14306 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:01:16.183576   14306 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:01:16.184619   14306 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:01:16.191605   14306 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:01:16.262709   14306 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:01:16.284281   14306 out.go:204]   - Generating certificates and keys ...
	I0128 11:01:16.284358   14306 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:01:16.284431   14306 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:01:16.284517   14306 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:01:16.284589   14306 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:01:16.284705   14306 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:01:16.284794   14306 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:01:16.284878   14306 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:01:16.284961   14306 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:01:16.285078   14306 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:01:16.285213   14306 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:01:16.285255   14306 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:01:16.285307   14306 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:01:16.365829   14306 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:01:16.461744   14306 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:01:16.529359   14306 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:01:16.653062   14306 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:01:16.653686   14306 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:01:16.675326   14306 out.go:204]   - Booting up control plane ...
	I0128 11:01:16.675479   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:01:16.675578   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:01:16.675676   14306 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:01:16.675772   14306 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:01:16.675952   14306 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:01:56.663342   14306 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:01:56.663884   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:01:56.664034   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:02:01.665201   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:02:01.665409   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:02:11.666319   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:02:11.666533   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:02:31.667021   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:02:31.667220   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:03:11.668349   14306 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:03:11.668591   14306 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:03:11.668604   14306 kubeadm.go:322] 
	I0128 11:03:11.668650   14306 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:03:11.668698   14306 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:03:11.668707   14306 kubeadm.go:322] 
	I0128 11:03:11.668758   14306 kubeadm.go:322] This error is likely caused by:
	I0128 11:03:11.668803   14306 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:03:11.668925   14306 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:03:11.668940   14306 kubeadm.go:322] 
	I0128 11:03:11.669065   14306 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:03:11.669103   14306 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:03:11.669138   14306 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:03:11.669146   14306 kubeadm.go:322] 
	I0128 11:03:11.669256   14306 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:03:11.669370   14306 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:03:11.669462   14306 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:03:11.669525   14306 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:03:11.669625   14306 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:03:11.669662   14306 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:03:11.672324   14306 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:03:11.672394   14306 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:03:11.672509   14306 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:03:11.672600   14306 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:03:11.672669   14306 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:03:11.672740   14306 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:03:11.672754   14306 kubeadm.go:403] StartCluster complete in 3m53.960545745s
	I0128 11:03:11.672845   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:03:11.695818   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.695830   14306 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:03:11.695899   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:03:11.718623   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.718638   14306 logs.go:281] No container was found matching "etcd"
	I0128 11:03:11.718706   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:03:11.742481   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.742493   14306 logs.go:281] No container was found matching "coredns"
	I0128 11:03:11.742561   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:03:11.766480   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.766495   14306 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:03:11.766562   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:03:11.790201   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.790215   14306 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:03:11.790285   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:03:11.813831   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.813844   14306 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:03:11.813919   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:03:11.836209   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.836223   14306 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:03:11.836294   14306 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:03:11.859549   14306 logs.go:279] 0 containers: []
	W0128 11:03:11.859562   14306 logs.go:281] No container was found matching "kube-controller-manager"
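	Each probe above is the same docker invocation with a different name filter, and every one returns zero containers, confirming the kubelet never launched any control-plane pods. The per-component checks can be collapsed into a single hedged one-liner (k8s_ is the name prefix dockershim gives Kubernetes-managed containers):
	    docker ps -a --filter name=k8s_ --format 'table {{.Names}}\t{{.Status}}'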
	I0128 11:03:11.859569   14306 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:03:11.859575   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:03:11.914126   14306 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:03:11.914140   14306 logs.go:124] Gathering logs for Docker ...
	I0128 11:03:11.914147   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:03:11.931974   14306 logs.go:124] Gathering logs for container status ...
	I0128 11:03:11.931989   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:03:13.979166   14306 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04718384s)
	I0128 11:03:13.980755   14306 logs.go:124] Gathering logs for kubelet ...
	I0128 11:03:13.980765   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:03:14.018799   14306 logs.go:124] Gathering logs for dmesg ...
	I0128 11:03:14.018813   14306 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
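	The same material minikube gathers here can be pulled by hand through its ssh wrapper, mirroring the audit entries later in this report (profile name taken from this run; sketch only):
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-510000 sudo journalctl -u kubelet -n 400
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-510000 "sudo crictl ps -a || sudo docker ps -a"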
	W0128 11:03:14.032727   14306 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:03:14.032747   14306 out.go:239] * 
	W0128 11:03:14.032850   14306 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:03:14.032864   14306 out.go:239] * 
	W0128 11:03:14.033485   14306 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 11:03:14.077870   14306 out.go:177] 
	W0128 11:03:14.136188   14306 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:03:14.136370   14306 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:03:14.136468   14306 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:03:14.193969   14306 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
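Given the K8S_KUBELET_NOT_RUNNING classification and the suggestion logged above, the corresponding manual retry would look like the following sketch, built from this run's profile plus the suggested flag (not a command the test harness ran):

    out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 \
      --kubernetes-version=v1.16.0 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd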
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-510000

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-510000: (1.692650651s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 status --format={{.Host}}

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 status --format={{.Host}}: exit status 7 (130.993581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
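minikube status reports host state through its exit code as well as stdout, which is why the harness treats exit status 7 after a stop as acceptable rather than a failure. A quick way to observe both at once (same profile as above):

    out/minikube-darwin-amd64 status -p kubernetes-upgrade-510000 --format={{.Host}}; echo "exit=$?"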
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m38.631715778s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-510000 version --output=json
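The version probe emits JSON for both client and server; assuming jq is available, the server version that the subsequent downgrade check compares against can be read directly:

    kubectl --context kubernetes-upgrade-510000 version --output=json | jq -r '.serverVersion.gitVersion'
    # expect: v1.26.1 after the upgrade above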
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (702.635451ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-510000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-510000
	    minikube start -p kubernetes-upgrade-510000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5100002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-510000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (18.894082812s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-01-28 11:08:14.405406 -0800 PST m=+2832.369508699
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-510000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-510000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c",
	        "Created": "2023-01-28T18:59:11.927845923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:03:17.900594056Z",
	            "FinishedAt": "2023-01-28T19:03:14.828238911Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c/hosts",
	        "LogPath": "/var/lib/docker/containers/27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c/27bd1f9ea20ad146dabb4f3cca9ad498adfdcd1d85a6ba27206e332c10272e1c-json.log",
	        "Name": "/kubernetes-upgrade-510000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-510000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-510000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8da72aeba13b48fae3fcfe24c4e1dc8329747d75c88a3f5f99d26359424d0b44-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8da72aeba13b48fae3fcfe24c4e1dc8329747d75c88a3f5f99d26359424d0b44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8da72aeba13b48fae3fcfe24c4e1dc8329747d75c88a3f5f99d26359424d0b44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8da72aeba13b48fae3fcfe24c4e1dc8329747d75c88a3f5f99d26359424d0b44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-510000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-510000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-510000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-510000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-510000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8e69e6f8a052f743c5793a73d6e046a901d918e0a87b202e45ccebf88fce766",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53081"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e8e69e6f8a05",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-510000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "27bd1f9ea20a",
	                        "kubernetes-upgrade-510000"
	                    ],
	                    "NetworkID": "8f5fd0c8fa9271b729e208d97cb70f659c77796513685b0428136c96e3f87b79",
	                    "EndpointID": "9ad9dd8158bdca97953a185116ea431d24b4aff956e0c96f6ce8dd02fb73e20a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
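Rather than scanning the full inspect dump, individual fields can be pulled with docker's Go-template format flag; two sketches against the same container:

    docker inspect -f '{{.State.Status}}' kubernetes-upgrade-510000
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-510000   # 53081 in the dump above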
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-510000 -n kubernetes-upgrade-510000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 logs -n 25: (2.811323951s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo docker                           | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo                                  | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo cat                              | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo containerd                       | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST |                     |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo systemctl                        | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo find                             | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-360000 sudo crio                             | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-360000                                       | auto-360000               | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:06 PST |
	| start   | -p calico-360000 --memory=3072                       | calico-360000             | jenkins | v1.29.0 | 28 Jan 23 11:06 PST | 28 Jan 23 11:07 PST |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-510000                         | kubernetes-upgrade-510000 | jenkins | v1.29.0 | 28 Jan 23 11:07 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-510000                         | kubernetes-upgrade-510000 | jenkins | v1.29.0 | 28 Jan 23 11:07 PST | 28 Jan 23 11:08 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| ssh     | -p calico-360000 pgrep -a                            | calico-360000             | jenkins | v1.29.0 | 28 Jan 23 11:07 PST | 28 Jan 23 11:07 PST |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:07:55
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:07:55.574191   16796 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:07:55.574352   16796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:07:55.574357   16796 out.go:309] Setting ErrFile to fd 2...
	I0128 11:07:55.574361   16796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:07:55.574473   16796 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:07:55.574956   16796 out.go:303] Setting JSON to false
	I0128 11:07:55.593870   16796 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4050,"bootTime":1674928825,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:07:55.593957   16796 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:07:55.616664   16796 out.go:177] * [kubernetes-upgrade-510000] minikube v1.29.0 on Darwin 13.2
	I0128 11:07:55.638041   16796 notify.go:220] Checking for updates...
	I0128 11:07:55.659062   16796 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:07:55.680233   16796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:07:55.701122   16796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:07:55.722270   16796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:07:55.743105   16796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:07:55.763971   16796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:07:55.785735   16796 config.go:180] Loaded profile config "kubernetes-upgrade-510000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:07:55.786412   16796 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:07:55.856074   16796 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:07:55.856218   16796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:07:56.010218   16796 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:61 SystemTime:2023-01-28 19:07:55.910518943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:07:56.032129   16796 out.go:177] * Using the docker driver based on existing profile
	I0128 11:07:56.053604   16796 start.go:296] selected driver: docker
	I0128 11:07:56.053623   16796 start.go:857] validating driver "docker" against &{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:07:56.053730   16796 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:07:56.057021   16796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:07:56.207875   16796 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:61 SystemTime:2023-01-28 19:07:56.1110245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:07:56.208023   16796 cni.go:84] Creating CNI manager for ""
	I0128 11:07:56.208036   16796 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:07:56.208051   16796 start_flags.go:319] config:
	{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:07:56.250550   16796 out.go:177] * Starting control plane node kubernetes-upgrade-510000 in cluster kubernetes-upgrade-510000
	I0128 11:07:56.287463   16796 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:07:56.308676   16796 out.go:177] * Pulling base image ...
	I0128 11:07:56.366432   16796 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:07:56.366474   16796 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:07:56.366499   16796 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:07:56.366509   16796 cache.go:57] Caching tarball of preloaded images
	I0128 11:07:56.366652   16796 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:07:56.366669   16796 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:07:56.367174   16796 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/config.json ...
	I0128 11:07:56.425109   16796 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:07:56.425130   16796 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:07:56.425164   16796 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:07:56.425210   16796 start.go:364] acquiring machines lock for kubernetes-upgrade-510000: {Name:mkfa40a9c66407b1117a3c099684776ebeaaf6f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:07:56.425313   16796 start.go:368] acquired machines lock for "kubernetes-upgrade-510000" in 77.178µs
	I0128 11:07:56.425337   16796 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:07:56.425346   16796 fix.go:55] fixHost starting: 
	I0128 11:07:56.425601   16796 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 11:07:56.490928   16796 fix.go:103] recreateIfNeeded on kubernetes-upgrade-510000: state=Running err=<nil>
	W0128 11:07:56.490958   16796 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:07:56.533657   16796 out.go:177] * Updating the running docker "kubernetes-upgrade-510000" container ...
	I0128 11:07:56.554816   16796 machine.go:88] provisioning docker machine ...
	I0128 11:07:56.554871   16796 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-510000"
	I0128 11:07:56.555029   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:56.618855   16796 main.go:141] libmachine: Using SSH client type: native
	I0128 11:07:56.619057   16796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53082 <nil> <nil>}
	I0128 11:07:56.619075   16796 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-510000 && echo "kubernetes-upgrade-510000" | sudo tee /etc/hostname
	I0128 11:07:56.763996   16796 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-510000
	
	I0128 11:07:56.764108   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:56.825766   16796 main.go:141] libmachine: Using SSH client type: native
	I0128 11:07:56.825931   16796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53082 <nil> <nil>}
	I0128 11:07:56.825948   16796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-510000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-510000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-510000' | sudo tee -a /etc/hosts; 
				fi
			fi
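
The shell fragment above pins the hostname to 127.0.1.1 in /etc/hosts only when no matching entry already exists. A minimal Go sketch of generating that script for an arbitrary hostname (hostsPatchScript is a hypothetical name for illustration, not minikube's API):

    package main

    import "fmt"

    // hostsPatchScript builds the guarded /etc/hosts update shown above:
    // rewrite an existing 127.0.1.1 entry if one is present, otherwise
    // append one, and do nothing when the hostname already resolves.
    func hostsPatchScript(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsPatchScript("kubernetes-upgrade-510000"))
    }
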
	I0128 11:07:56.960329   16796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:07:56.960352   16796 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:07:56.960373   16796 ubuntu.go:177] setting up certificates
	I0128 11:07:56.960384   16796 provision.go:83] configureAuth start
	I0128 11:07:56.960469   16796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-510000
	I0128 11:07:57.020362   16796 provision.go:138] copyHostCerts
	I0128 11:07:57.020464   16796 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:07:57.020474   16796 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:07:57.020594   16796 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:07:57.020853   16796 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:07:57.020860   16796 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:07:57.020923   16796 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:07:57.021100   16796 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:07:57.021106   16796 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:07:57.021167   16796 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:07:57.021300   16796 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-510000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-510000]
	I0128 11:07:57.246630   16796 provision.go:172] copyRemoteCerts
	I0128 11:07:57.246712   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:07:57.246767   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:57.308479   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:07:57.401797   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:07:57.419970   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 11:07:57.439283   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:07:57.457487   16796 provision.go:86] duration metric: configureAuth took 497.091795ms
	I0128 11:07:57.457501   16796 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:07:57.457654   16796 config.go:180] Loaded profile config "kubernetes-upgrade-510000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:07:57.457719   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:57.518645   16796 main.go:141] libmachine: Using SSH client type: native
	I0128 11:07:57.518825   16796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53082 <nil> <nil>}
	I0128 11:07:57.518835   16796 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:07:57.654189   16796 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:07:57.654205   16796 ubuntu.go:71] root file system type: overlay
	I0128 11:07:57.654391   16796 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:07:57.654478   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:57.715177   16796 main.go:141] libmachine: Using SSH client type: native
	I0128 11:07:57.715358   16796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53082 <nil> <nil>}
	I0128 11:07:57.715409   16796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:07:57.859518   16796 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:07:57.859609   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:57.920434   16796 main.go:141] libmachine: Using SSH client type: native
	I0128 11:07:57.920590   16796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53082 <nil> <nil>}
	I0128 11:07:57.920604   16796 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:07:58.058467   16796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:07:58.058484   16796 machine.go:91] provisioned docker machine in 1.503665531s
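
Two details of the provisioning step above are worth noting: the bare ExecStart= line in the generated unit is the standard systemd idiom for clearing an inherited start command before setting a new one, and the diff -u ... || { mv ...; systemctl ... } one-liner only reloads and restarts Docker when the unit actually changed. A hedged Go sketch of that swap-on-diff step (swapIfChanged is a hypothetical helper, not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // swapIfChanged promotes candidate over live only when their contents
    // differ, mirroring the diff/mv one-liner in the log; when it returns
    // true, the caller would daemon-reload and restart the service.
    func swapIfChanged(live, candidate string) (bool, error) {
        cur, _ := os.ReadFile(live) // a missing live unit reads as empty
        cand, err := os.ReadFile(candidate)
        if err != nil {
            return false, err
        }
        if bytes.Equal(cur, cand) {
            return false, os.Remove(candidate)
        }
        return true, os.Rename(candidate, live)
    }

    func main() {
        changed, err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println(changed, err)
    }
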
	I0128 11:07:58.058491   16796 start.go:300] post-start starting for "kubernetes-upgrade-510000" (driver="docker")
	I0128 11:07:58.058497   16796 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:07:58.058569   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:07:58.058623   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:58.119892   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:07:58.213406   16796 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:07:58.217606   16796 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:07:58.217626   16796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:07:58.217636   16796 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:07:58.217642   16796 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:07:58.217649   16796 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:07:58.217737   16796 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:07:58.217898   16796 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:07:58.218065   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:07:58.225805   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:07:58.243570   16796 start.go:303] post-start completed in 185.069778ms
	I0128 11:07:58.243678   16796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:07:58.243759   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:58.304590   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:07:58.395245   16796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:07:58.400352   16796 fix.go:57] fixHost completed within 1.975022568s
	I0128 11:07:58.400367   16796 start.go:83] releasing machines lock for "kubernetes-upgrade-510000", held for 1.975066077s
	I0128 11:07:58.400461   16796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-510000
	I0128 11:07:58.461101   16796 ssh_runner.go:195] Run: cat /version.json
	I0128 11:07:58.461104   16796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:07:58.461167   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:58.461181   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:07:58.526832   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:07:58.527036   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:07:58.682371   16796 ssh_runner.go:195] Run: systemctl --version
	I0128 11:07:58.687222   16796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 11:07:58.692508   16796 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 11:07:58.692626   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:07:58.700442   16796 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:07:58.713982   16796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:07:58.722807   16796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:07:58.730616   16796 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0128 11:07:58.730633   16796 start.go:483] detecting cgroup driver to use...
	I0128 11:07:58.730646   16796 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:07:58.730807   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:07:58.745671   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:07:58.756841   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:07:58.767396   16796 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:07:58.767457   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:07:58.777192   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:07:58.786803   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:07:58.795846   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:07:58.805348   16796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:07:58.814104   16796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:07:58.823435   16796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:07:58.831220   16796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:07:58.839370   16796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:07:58.927942   16796 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:08:00.012013   16796 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.084056828s)
	I0128 11:08:00.012028   16796 start.go:483] detecting cgroup driver to use...
	I0128 11:08:00.012047   16796 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:08:00.012116   16796 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:08:00.025651   16796 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:08:00.025729   16796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:08:00.038716   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:08:00.060411   16796 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
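
Just above, /etc/crictl.yaml is rewritten a second time: having pointed crictl at containerd's socket earlier, the provisioner retargets it at cri-dockerd, since this profile runs the docker container runtime. A sketch of rendering that two-line file (crictlConfig is a hypothetical helper):

    package main

    import "fmt"

    // crictlConfig renders the two-line /etc/crictl.yaml written above,
    // pointing both crictl endpoints at a single CRI socket.
    func crictlConfig(sock string) string {
        return fmt.Sprintf("runtime-endpoint: unix://%[1]s\nimage-endpoint: unix://%[1]s\n", sock)
    }

    func main() {
        fmt.Print(crictlConfig("/var/run/cri-dockerd.sock"))
    }
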
	I0128 11:08:00.161229   16796 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:08:00.256321   16796 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:08:00.256339   16796 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:08:00.270984   16796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:08:00.386377   16796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:08:00.775541   16796 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:08:00.846282   16796 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:08:00.977005   16796 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:08:01.166127   16796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:08:01.373606   16796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:08:01.462339   16796 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:08:01.462442   16796 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:08:01.470428   16796 start.go:551] Will wait 60s for crictl version
	I0128 11:08:01.470523   16796 ssh_runner.go:195] Run: which crictl
	I0128 11:08:01.480491   16796 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:08:01.855850   16796 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:08:01.855947   16796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:08:01.963072   16796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:08:02.089379   16796 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:08:02.089516   16796 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-510000 dig +short host.docker.internal
	I0128 11:08:02.255762   16796 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:08:02.255894   16796 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:08:02.261502   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:08:02.326745   16796 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:08:02.326818   16796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:08:02.363465   16796 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:08:02.363487   16796 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:08:02.363626   16796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:08:02.458090   16796 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:08:02.458116   16796 cache_images.go:84] Images are preloaded, skipping loading
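
"Images are preloaded, skipping loading" means every image required for v1.26.1 already appears in the docker images listing above. A sketch of that set-membership check (missingImages is hypothetical; image names are taken from the log):

    package main

    import "fmt"

    // missingImages reports which required image refs are absent from the
    // daemon's image listing; an empty result is the condition behind
    // "Images are preloaded, skipping loading".
    func missingImages(have, want []string) []string {
        got := make(map[string]bool, len(have))
        for _, img := range have {
            got[img] = true
        }
        var missing []string
        for _, img := range want {
            if !got[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        have := []string{"registry.k8s.io/kube-apiserver:v1.26.1", "registry.k8s.io/etcd:3.5.6-0"}
        want := []string{"registry.k8s.io/kube-apiserver:v1.26.1", "registry.k8s.io/pause:3.9"}
        fmt.Println(missingImages(have, want)) // [registry.k8s.io/pause:3.9]
    }
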
	I0128 11:08:02.458220   16796 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:08:02.574727   16796 cni.go:84] Creating CNI manager for ""
	I0128 11:08:02.574746   16796 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:08:02.574766   16796 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:08:02.574783   16796 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-510000 NodeName:kubernetes-upgrade-510000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:08:02.574925   16796 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-510000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
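
The kubeadm config above is a single stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A stdlib-only Go sketch of splitting such a stream back into its documents (splitYAMLDocs is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitYAMLDocs splits a multi-document YAML stream like the kubeadm
    // config above on bare "---" separator lines.
    func splitYAMLDocs(stream string) []string {
        var docs []string
        for _, d := range strings.Split(stream, "\n---\n") {
            if s := strings.TrimSpace(d); s != "" {
                docs = append(docs, s)
            }
        }
        return docs
    }

    func main() {
        cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
        for i, d := range splitYAMLDocs(cfg) {
            fmt.Printf("doc %d: %q\n", i, d)
        }
    }
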
	
	I0128 11:08:02.575017   16796 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-510000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
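
The kubelet ExecStart above is assembled from per-node settings such as the CRI socket, hostname override, and node IP. A hedged sketch of rendering such flags deterministically (kubeletFlags is a hypothetical helper; the flag names mirror the log):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // kubeletFlags renders --key=value pairs in sorted order so the
    // generated unit file is stable across runs, as in the ExecStart
    // line above.
    func kubeletFlags(opts map[string]string) string {
        keys := make([]string, 0, len(opts))
        for k := range opts {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := make([]string, 0, len(keys))
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(kubeletFlags(map[string]string{
            "container-runtime-endpoint": "/var/run/cri-dockerd.sock",
            "hostname-override":          "kubernetes-upgrade-510000",
            "node-ip":                    "192.168.76.2",
        }))
    }
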
	I0128 11:08:02.575087   16796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:08:02.584417   16796 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:08:02.584487   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:08:02.592985   16796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0128 11:08:02.609016   16796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:08:02.638562   16796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0128 11:08:02.655094   16796 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:08:02.660064   16796 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000 for IP: 192.168.76.2
	I0128 11:08:02.660084   16796 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:08:02.660269   16796 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:08:02.660353   16796 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:08:02.660468   16796 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key
	I0128 11:08:02.660575   16796 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key.31bdca25
	I0128 11:08:02.660665   16796 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key
	I0128 11:08:02.660907   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:08:02.660949   16796 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:08:02.660961   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:08:02.661001   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:08:02.661041   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:08:02.661078   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:08:02.661155   16796 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:08:02.661744   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:08:02.686531   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:08:02.709620   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:08:02.728533   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 11:08:02.758576   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:08:02.787996   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:08:02.812139   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:08:02.861998   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:08:02.884204   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:08:02.902343   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:08:02.920700   16796 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:08:02.938322   16796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:08:02.952005   16796 ssh_runner.go:195] Run: openssl version
	I0128 11:08:02.958722   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:08:02.968040   16796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:08:02.972426   16796 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:08:02.972484   16796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:08:02.978701   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 11:08:02.987209   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:08:02.996124   16796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:08:03.000452   16796 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:08:03.000497   16796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:08:03.006165   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:08:03.014240   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:08:03.023402   16796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:08:03.027584   16796 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:08:03.027641   16796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:08:03.033704   16796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
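	The sequence above repeats one idiom per certificate: hash the PEM with openssl x509 -hash -noout, then symlink it as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find it. A minimal Go sketch of the same idiom (assumes openssl on PATH and write access to the certs directory; paths are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of a PEM certificate and
	// links it as <hash>.0, mirroring the ln -fs commands logged above.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, as ln -fs would
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}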
	I0128 11:08:03.041485   16796 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:08:03.041590   16796 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:08:03.067686   16796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:08:03.076138   16796 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:08:03.076162   16796 kubeadm.go:633] restartCluster start
	I0128 11:08:03.076221   16796 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:08:03.084823   16796 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:08:03.084909   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:08:03.146020   16796 kubeconfig.go:92] found "kubernetes-upgrade-510000" server: "https://127.0.0.1:53081"
	I0128 11:08:03.146854   16796 kapi.go:59] client config for kubernetes-upgrade-510000: &rest.Config{Host:"https://127.0.0.1:53081", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:08:03.147440   16796 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:08:03.155309   16796 api_server.go:165] Checking apiserver status ...
	I0128 11:08:03.155383   16796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:08:03.164883   16796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12632/cgroup
	W0128 11:08:03.173446   16796 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12632/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:08:03.173511   16796 ssh_runner.go:195] Run: ls
	I0128 11:08:03.177952   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:04.661031   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:08:04.661067   16796 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:53081/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:08:04.925644   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:04.932427   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:04.932448   16796 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:53081/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:05.315858   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:05.322404   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:05.322423   16796 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:53081/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:05.745249   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:05.750706   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:05.750724   16796 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:53081/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:06.224842   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:06.231625   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 200:
	ok
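	The probe pattern above is a plain HTTPS GET against /healthz, retried with a growing delay: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles post-start hook settles, then 200. A minimal Go sketch of such a poller (the port comes from the log; InsecureSkipVerify stands in for the CA handling a real client would do):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthy(url string, attempts int) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, as the retry.go lines above do
		}
		return fmt.Errorf("apiserver never became healthy")
	}

	func main() {
		_ = waitHealthy("https://127.0.0.1:53081/healthz", 20)
	}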
	I0128 11:08:06.243361   16796 system_pods.go:86] 5 kube-system pods found
	I0128 11:08:06.243378   16796 system_pods.go:89] "etcd-kubernetes-upgrade-510000" [bcaf9f74-48f0-4b50-986e-9a0b8b7f875d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:08:06.243384   16796 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-510000" [4c86e68f-aa0b-49be-8c45-f944175355ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:08:06.243393   16796 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-510000" [e8ceb547-5f56-4dfc-b126-602c70d81ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:08:06.243400   16796 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-510000" [ebba30e0-90b2-4e9c-8939-457a92d58b92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:08:06.243406   16796 system_pods.go:89] "storage-provisioner" [ba350de1-6a66-4677-9a69-dfe343b644cd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:08:06.243410   16796 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0128 11:08:06.243418   16796 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:08:06.243485   16796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:08:06.269454   16796 docker.go:456] Stopping containers: [2d800d7f32b3 172917e17f02 2727b66a847a 23b586b8ce6f 5381399361b4 6648f8d8ae97 3758790638f7 dec6fd34a50d 74f7a6bd90b2 6fb0ed437a69 20c8a382aed2 15e5321836bc 4532e4b1ad35 2422e61102ae 9b5215b8a25e e0b606a15c1c 5a6183a0d01c]
	I0128 11:08:06.269559   16796 ssh_runner.go:195] Run: docker stop 2d800d7f32b3 172917e17f02 2727b66a847a 23b586b8ce6f 5381399361b4 6648f8d8ae97 3758790638f7 dec6fd34a50d 74f7a6bd90b2 6fb0ed437a69 20c8a382aed2 15e5321836bc 4532e4b1ad35 2422e61102ae 9b5215b8a25e e0b606a15c1c 5a6183a0d01c
	I0128 11:08:07.343546   16796 ssh_runner.go:235] Completed: docker stop 2d800d7f32b3 172917e17f02 2727b66a847a 23b586b8ce6f 5381399361b4 6648f8d8ae97 3758790638f7 dec6fd34a50d 74f7a6bd90b2 6fb0ed437a69 20c8a382aed2 15e5321836bc 4532e4b1ad35 2422e61102ae 9b5215b8a25e e0b606a15c1c 5a6183a0d01c: (1.073965978s)
	I0128 11:08:07.343642   16796 ssh_runner.go:195] Run: sudo systemctl stop kubelet
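	The teardown above lists kube-system pod containers by Docker name filter and stops them in a single docker stop call before stopping the kubelet. A minimal Go sketch of the same two docker CLI invocations (assumes the docker client is available where this runs):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// list all containers whose names match kubelet's k8s_<name>_(kube-system)_ scheme
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("listing containers:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return
		}
		// stop them all in one invocation, as the log above does
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			fmt.Println("stopping containers:", err)
		}
	}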
	I0128 11:08:07.382841   16796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:08:07.391785   16796 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 28 19:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 19:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 28 19:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 28 19:07 /etc/kubernetes/scheduler.conf
	
	I0128 11:08:07.391852   16796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:08:07.401079   16796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:08:07.439350   16796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:08:07.449324   16796 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:08:07.449387   16796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:08:07.457904   16796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:08:07.467577   16796 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:08:07.467646   16796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
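	The pattern above greps each kubeconfig for the expected control-plane endpoint and deletes any file that lacks it, so the kubeadm init phase kubeconfig step below can regenerate it; grep's exit status 1 means "no match", not failure. A minimal Go sketch of that check:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	// ensureEndpoint removes conf when it does not mention endpoint, so a later
	// kubeadm phase can rewrite it. grep exits 1 on no match, which os/exec
	// surfaces as an *exec.ExitError.
	func ensureEndpoint(conf, endpoint string) error {
		err := exec.Command("grep", endpoint, conf).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			return os.Remove(conf)
		}
		return err
	}

	func main() {
		_ = ensureEndpoint("/etc/kubernetes/scheduler.conf",
			"https://control-plane.minikube.internal:8443")
	}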
	I0128 11:08:07.476629   16796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:08:07.485533   16796 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:08:07.485548   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:07.538897   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:07.962477   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:08.106218   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:08.170919   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:08.340199   16796 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:08:08.340297   16796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:08:08.854585   16796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:08:09.354717   16796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:08:09.368945   16796 api_server.go:71] duration metric: took 1.02875954s to wait for apiserver process to appear ...
	I0128 11:08:09.368962   16796 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:08:09.368971   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:11.415584   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:08:11.415598   16796 api_server.go:102] status: https://127.0.0.1:53081/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:08:11.915646   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:11.921109   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:08:11.921123   16796 api_server.go:102] status: https://127.0.0.1:53081/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:12.415769   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:12.421049   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:08:12.421065   16796 api_server.go:102] status: https://127.0.0.1:53081/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:08:12.915886   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:12.920907   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 200:
	ok
	I0128 11:08:12.928190   16796 api_server.go:140] control plane version: v1.26.1
	I0128 11:08:12.928209   16796 api_server.go:130] duration metric: took 3.559274931s to wait for apiserver health ...
	I0128 11:08:12.928217   16796 cni.go:84] Creating CNI manager for ""
	I0128 11:08:12.928227   16796 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:08:12.962290   16796 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:08:12.982706   16796 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:08:12.992538   16796 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
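	The conflist written above to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a representative bridge CNI configuration (assumed, not minikube's exact file) using the clusterCIDR from the kube-proxy config, written the same way:

	package main

	import "os"

	// A representative bridge conflist; the real file minikube generates may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}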
	I0128 11:08:13.006328   16796 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:08:13.011936   16796 system_pods.go:59] 5 kube-system pods found
	I0128 11:08:13.011955   16796 system_pods.go:61] "etcd-kubernetes-upgrade-510000" [bcaf9f74-48f0-4b50-986e-9a0b8b7f875d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:08:13.011963   16796 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-510000" [4c86e68f-aa0b-49be-8c45-f944175355ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:08:13.011972   16796 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-510000" [e8ceb547-5f56-4dfc-b126-602c70d81ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:08:13.011979   16796 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-510000" [ebba30e0-90b2-4e9c-8939-457a92d58b92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:08:13.011984   16796 system_pods.go:61] "storage-provisioner" [ba350de1-6a66-4677-9a69-dfe343b644cd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:08:13.011988   16796 system_pods.go:74] duration metric: took 5.648606ms to wait for pod list to return data ...
	I0128 11:08:13.011993   16796 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:08:13.015418   16796 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0128 11:08:13.015434   16796 node_conditions.go:123] node cpu capacity is 6
	I0128 11:08:13.015443   16796 node_conditions.go:105] duration metric: took 3.445155ms to run NodePressure ...
	I0128 11:08:13.015458   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:08:13.159378   16796 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 11:08:13.167028   16796 ops.go:34] apiserver oom_adj: -16
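	The ops.go line above confirms the apiserver is shielded from the OOM killer (oom_adj -16). A minimal Go sketch of the same check, assuming a Linux host and reading the legacy /proc/<pid>/oom_adj file as the logged command does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep -n returns the newest matching PID, like the log's $(pgrep kube-apiserver)
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}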
	I0128 11:08:13.167036   16796 kubeadm.go:637] restartCluster took 10.090965666s
	I0128 11:08:13.167042   16796 kubeadm.go:403] StartCluster complete in 10.125663084s
	I0128 11:08:13.167052   16796 settings.go:142] acquiring lock: {Name:mkfe63daf2cbfdaa44c3edb51b8dcbfb26a764e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:08:13.167134   16796 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:08:13.167799   16796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/kubeconfig: {Name:mk9285754a110019f97a480561fbfd0056cc86f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:08:13.168069   16796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 11:08:13.168085   16796 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 11:08:13.168140   16796 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-510000"
	I0128 11:08:13.168156   16796 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-510000"
	W0128 11:08:13.168162   16796 addons.go:236] addon storage-provisioner should already be in state true
	I0128 11:08:13.168187   16796 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-510000"
	I0128 11:08:13.168204   16796 host.go:66] Checking if "kubernetes-upgrade-510000" exists ...
	I0128 11:08:13.168211   16796 config.go:180] Loaded profile config "kubernetes-upgrade-510000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:08:13.168214   16796 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-510000"
	I0128 11:08:13.168495   16796 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 11:08:13.168528   16796 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 11:08:13.168592   16796 kapi.go:59] client config for kubernetes-upgrade-510000: &rest.Config{Host:"https://127.0.0.1:53081", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:08:13.174365   16796 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-510000" context rescaled to 1 replicas
	I0128 11:08:13.174396   16796 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:08:13.197859   16796 out.go:177] * Verifying Kubernetes components...
	I0128 11:08:13.256216   16796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:08:13.268651   16796 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 11:08:13.271354   16796 kapi.go:59] client config for kubernetes-upgrade-510000: &rest.Config{Host:"https://127.0.0.1:53081", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubernetes-upgrade-510000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:08:13.273148   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:08:13.291850   16796 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 11:08:13.301585   16796 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-510000"
	W0128 11:08:13.312999   16796 addons.go:236] addon default-storageclass should already be in state true
	I0128 11:08:13.313027   16796 host.go:66] Checking if "kubernetes-upgrade-510000" exists ...
	I0128 11:08:13.313104   16796 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:08:13.313115   16796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 11:08:13.313185   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:08:13.313892   16796 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	I0128 11:08:13.360545   16796 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:08:13.360641   16796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:08:13.373296   16796 api_server.go:71] duration metric: took 198.870758ms to wait for apiserver process to appear ...
	I0128 11:08:13.373322   16796 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:08:13.373342   16796 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53081/healthz ...
	I0128 11:08:13.381329   16796 api_server.go:278] https://127.0.0.1:53081/healthz returned 200:
	ok
	I0128 11:08:13.383959   16796 api_server.go:140] control plane version: v1.26.1
	I0128 11:08:13.383973   16796 api_server.go:130] duration metric: took 10.645579ms to wait for apiserver health ...
	I0128 11:08:13.383979   16796 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:08:13.384198   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:08:13.384489   16796 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 11:08:13.384501   16796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 11:08:13.384584   16796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	I0128 11:08:13.389415   16796 system_pods.go:59] 5 kube-system pods found
	I0128 11:08:13.389446   16796 system_pods.go:61] "etcd-kubernetes-upgrade-510000" [bcaf9f74-48f0-4b50-986e-9a0b8b7f875d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:08:13.389455   16796 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-510000" [4c86e68f-aa0b-49be-8c45-f944175355ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:08:13.389468   16796 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-510000" [e8ceb547-5f56-4dfc-b126-602c70d81ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:08:13.389474   16796 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-510000" [ebba30e0-90b2-4e9c-8939-457a92d58b92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:08:13.389480   16796 system_pods.go:61] "storage-provisioner" [ba350de1-6a66-4677-9a69-dfe343b644cd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:08:13.389485   16796 system_pods.go:74] duration metric: took 5.502473ms to wait for pod list to return data ...
	I0128 11:08:13.389492   16796 kubeadm.go:578] duration metric: took 215.080004ms to wait for : map[apiserver:true system_pods:true] ...
	I0128 11:08:13.389503   16796 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:08:13.392834   16796 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0128 11:08:13.392854   16796 node_conditions.go:123] node cpu capacity is 6
	I0128 11:08:13.392871   16796 node_conditions.go:105] duration metric: took 3.360353ms to run NodePressure ...
	I0128 11:08:13.392884   16796 start.go:228] waiting for startup goroutines ...
	I0128 11:08:13.447170   16796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53082 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/kubernetes-upgrade-510000/id_rsa Username:docker}
	I0128 11:08:13.496687   16796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:08:13.555444   16796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 11:08:14.232899   16796 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 11:08:14.253832   16796 addons.go:492] enable addons completed in 1.085755492s: enabled=[storage-provisioner default-storageclass]
	I0128 11:08:14.253899   16796 start.go:233] waiting for cluster config update ...
	I0128 11:08:14.253926   16796 start.go:240] writing updated cluster config ...
	I0128 11:08:14.254844   16796 ssh_runner.go:195] Run: rm -f paused
	I0128 11:08:14.295844   16796 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0128 11:08:14.317938   16796 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-510000" cluster and "default" namespace by default
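	The closing start.go line compares kubectl's minor version with the cluster's; a skew of 1 is inside kubectl's supported +/-1 window, so it is logged as a note rather than a warning. A minimal Go sketch of the comparison (version strings taken from the log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a version like "1.25.4" or "v1.26.1".
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.25.4", "1.26.1"
		skew := minor(cluster) - minor(kubectl)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}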
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:03:18 UTC, end at Sat 2023-01-28 19:08:15 UTC. --
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.546886909Z" level=info msg="Starting up"
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.548547017Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.548564867Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.548580162Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.548588220Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.549915032Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.549956345Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.549975659Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.549985925Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.558574379Z" level=info msg="Loading containers: start."
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.695438598Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.734805924Z" level=info msg="Loading containers: done."
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.751052303Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.751129404Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:08:00 kubernetes-upgrade-510000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.777959988Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:08:00 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:00.785659350Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.366872082Z" level=info msg="ignoring event" container=dec6fd34a50dbd14c31f0c7ca951c732915b222c665b55afff22774fb5765e8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.366992689Z" level=info msg="ignoring event" container=6648f8d8ae97e6d70c1141041555fe393af3f2caced00d124b1fbc373518b630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.377383327Z" level=info msg="ignoring event" container=2727b66a847aec795196bb169759490eef160ba3d0af736a87bfff3e3de2e2a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.377513348Z" level=info msg="ignoring event" container=172917e17f02ca3fd641fecf548e7cac32cdadae839f5acf6ed1d78f3afaa715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.377553057Z" level=info msg="ignoring event" container=5381399361b4986a929fee51c058052e815ccee3b57836bf674c0117506d46a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.442893471Z" level=info msg="ignoring event" container=3758790638f7005be4d5599014ce8c15c905a7660002fe0ea3506616921ef7f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:06 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:06.462954485Z" level=info msg="ignoring event" container=23b586b8ce6fe29375969d62714b5fc08f7175c87ff338dedc8c24d4a700991e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:08:07 kubernetes-upgrade-510000 dockerd[12135]: time="2023-01-28T19:08:07.265935376Z" level=info msg="ignoring event" container=2d800d7f32b342bf67e928840dc97689be1b8abe64bf28824c2a7757212b441a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d48c6c6d54e02       655493523f607       7 seconds ago       Running             kube-scheduler            2                   414ac34bef2fb
	30a63ac293842       e9c08e11b07f6       7 seconds ago       Running             kube-controller-manager   2                   626781ff716cb
	2ad0697d35998       deb04688c4a35       7 seconds ago       Running             kube-apiserver            2                   5f0b165b3bf7c
	9ac7543eb7a48       fce326961ae2d       7 seconds ago       Running             etcd                      2                   b0ef12b2fef9c
	2d800d7f32b34       deb04688c4a35       14 seconds ago      Exited              kube-apiserver            1                   dec6fd34a50db
	172917e17f02c       655493523f607       14 seconds ago      Exited              kube-scheduler            1                   3758790638f70
	2727b66a847ae       e9c08e11b07f6       14 seconds ago      Exited              kube-controller-manager   1                   6648f8d8ae97e
	23b586b8ce6fe       fce326961ae2d       14 seconds ago      Exited              etcd                      1                   5381399361b49
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-510000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-510000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090
	                    minikube.k8s.io/name=kubernetes-upgrade-510000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_28T11_07_52_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 19:07:49 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-510000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 19:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 19:08:11 +0000   Sat, 28 Jan 2023 19:07:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 19:08:11 +0000   Sat, 28 Jan 2023 19:07:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 19:08:11 +0000   Sat, 28 Jan 2023 19:07:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 19:08:11 +0000   Sat, 28 Jan 2023 19:07:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-510000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
	  System UUID:                f1a46cb41c9d45969ef9bdf4a48d9b28
	  Boot ID:                    ee99b2f3-f371-4644-9f1b-3a130b11e40b
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-510000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         24s
	  kube-system                 kube-apiserver-kubernetes-upgrade-510000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-510000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-kubernetes-upgrade-510000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 24s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s              kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s              kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s              kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20s              kubelet  Node kubernetes-upgrade-510000 status is now: NodeReady
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node kubernetes-upgrade-510000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000048] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000074] FS-Cache: N-cookie d=000000001f185d94{9p.inode} n=000000006f289b09
	[  +0.000067] FS-Cache: N-key=[8] '000fdf0500000000'
	[  +0.002750] FS-Cache: Duplicate cookie detected
	[  +0.000047] FS-Cache: O-cookie c=00000006 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=000000001f185d94{9p.inode} n=0000000024714d0d
	[  +0.000065] FS-Cache: O-key=[8] '000fdf0500000000'
	[  +0.000040] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000086] FS-Cache: N-cookie d=000000001f185d94{9p.inode} n=0000000020ede7a1
	[  +0.000068] FS-Cache: N-key=[8] '000fdf0500000000'
	[  +3.153159] FS-Cache: Duplicate cookie detected
	[  +0.000064] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000042] FS-Cache: O-cookie d=000000001f185d94{9p.inode} n=000000003a5757c7
	[  +0.000151] FS-Cache: O-key=[8] 'ff0edf0500000000'
	[  +0.000060] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000057] FS-Cache: N-cookie d=000000001f185d94{9p.inode} n=00000000fff8394e
	[  +0.000053] FS-Cache: N-key=[8] 'ff0edf0500000000'
	[  +0.802988] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000062] FS-Cache: O-cookie d=000000001f185d94{9p.inode} n=00000000e1ea6f81
	[  +0.000039] FS-Cache: O-key=[8] '1c0fdf0500000000'
	[  +0.000036] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000059] FS-Cache: N-cookie d=000000001f185d94{9p.inode} n=0000000007fb830a
	[  +0.000050] FS-Cache: N-key=[8] '1c0fdf0500000000'
	[Jan28 18:55] hrtimer: interrupt took 1291156 ns
	
	* 
	* ==> etcd [23b586b8ce6f] <==
	* {"level":"info","ts":"2023-01-28T19:08:01.655Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T19:08:01.656Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T19:08:01.656Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T19:08:01.656Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:01.656Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:03.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:03.379Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-510000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T19:08:03.379Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:08:03.379Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:08:03.379Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T19:08:03.379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T19:08:03.380Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T19:08:03.380Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T19:08:06.345Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-28T19:08:06.345Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-510000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-01-28T19:08:06.372Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-28T19:08:06.374Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:06.375Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:06.375Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-510000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [9ac7543eb7a4] <==
	* {"level":"info","ts":"2023-01-28T19:08:09.150Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-01-28T19:08:09.150Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-01-28T19:08:09.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-01-28T19:08:09.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-28T19:08:09.150Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T19:08:09.150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T19:08:09.153Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T19:08:09.153Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T19:08:09.153Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T19:08:09.153Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:09.153Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-01-28T19:08:10.281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T19:08:10.283Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-510000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T19:08:10.283Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:08:10.284Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T19:08:10.284Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:08:10.285Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T19:08:10.286Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T19:08:10.286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:08:16 up  1:07,  0 users,  load average: 3.40, 1.80, 1.46
	Linux kubernetes-upgrade-510000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [2ad0697d3599] <==
	* I0128 19:08:11.413110       1 controller.go:85] Starting OpenAPI V3 controller
	I0128 19:08:11.413121       1 naming_controller.go:291] Starting NamingConditionController
	I0128 19:08:11.413135       1 establishing_controller.go:76] Starting EstablishingController
	I0128 19:08:11.413142       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0128 19:08:11.413164       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0128 19:08:11.413171       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0128 19:08:11.417251       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0128 19:08:11.417263       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0128 19:08:11.431015       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0128 19:08:11.450976       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0128 19:08:11.511193       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 19:08:11.511269       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 19:08:11.511382       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 19:08:11.511409       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0128 19:08:11.511608       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0128 19:08:11.512294       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 19:08:11.512556       1 cache.go:39] Caches are synced for autoregister controller
	I0128 19:08:11.517423       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0128 19:08:12.227707       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 19:08:12.414165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 19:08:13.091591       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 19:08:13.099362       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 19:08:13.118248       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 19:08:13.147236       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 19:08:13.152074       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [2d800d7f32b3] <==
	* E0128 19:08:06.360283       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361253       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361388       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0128 19:08:06.361451       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0128 19:08:06.361466       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361543       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361570       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361627       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361645       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0128 19:08:06.361709       1 watcher.go:219] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0128 19:08:06.361711       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [2727b66a847a] <==
	* I0128 19:08:02.592706       1 serving.go:348] Generated self-signed cert in-memory
	I0128 19:08:02.876020       1 controllermanager.go:182] Version: v1.26.1
	I0128 19:08:02.876073       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:08:02.877122       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0128 19:08:02.877155       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:08:02.877120       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 19:08:02.877138       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [30a63ac29384] <==
	* I0128 19:08:13.474614       1 replica_set.go:201] Starting replicationcontroller controller
	I0128 19:08:13.474650       1 shared_informer.go:273] Waiting for caches to sync for ReplicationController
	I0128 19:08:13.476934       1 controllermanager.go:622] Started "serviceaccount"
	I0128 19:08:13.477116       1 serviceaccounts_controller.go:111] Starting service account controller
	I0128 19:08:13.477251       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0128 19:08:13.486028       1 controllermanager.go:622] Started "horizontalpodautoscaling"
	I0128 19:08:13.486211       1 horizontal.go:181] Starting HPA controller
	I0128 19:08:13.486217       1 shared_informer.go:273] Waiting for caches to sync for HPA
	I0128 19:08:13.489966       1 controllermanager.go:622] Started "pv-protection"
	I0128 19:08:13.490114       1 pv_protection_controller.go:75] Starting PV protection controller
	I0128 19:08:13.490145       1 shared_informer.go:273] Waiting for caches to sync for PV protection
	I0128 19:08:13.492769       1 controllermanager.go:622] Started "ephemeral-volume"
	I0128 19:08:13.492939       1 controller.go:169] Starting ephemeral volume controller
	I0128 19:08:13.492951       1 shared_informer.go:273] Waiting for caches to sync for ephemeral
	I0128 19:08:13.501350       1 controllermanager.go:622] Started "garbagecollector"
	I0128 19:08:13.501628       1 garbagecollector.go:154] Starting garbage collector controller
	I0128 19:08:13.501649       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0128 19:08:13.501869       1 graph_builder.go:291] GraphBuilder running
	I0128 19:08:13.504214       1 controllermanager.go:622] Started "daemonset"
	I0128 19:08:13.504388       1 daemon_controller.go:265] Starting daemon sets controller
	I0128 19:08:13.504396       1 shared_informer.go:273] Waiting for caches to sync for daemon sets
	I0128 19:08:13.506880       1 controllermanager.go:622] Started "statefulset"
	I0128 19:08:13.506965       1 stateful_set.go:152] Starting stateful set controller
	I0128 19:08:13.506974       1 shared_informer.go:273] Waiting for caches to sync for stateful set
	I0128 19:08:13.560840       1 shared_informer.go:280] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [172917e17f02] <==
	* I0128 19:08:02.193101       1 serving.go:348] Generated self-signed cert in-memory
	W0128 19:08:04.674709       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 19:08:04.674732       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 19:08:04.674740       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 19:08:04.674747       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 19:08:04.686348       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 19:08:04.686425       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:08:04.687452       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 19:08:04.687575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 19:08:04.687618       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:08:04.687639       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:08:04.788613       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:08:06.305003       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0128 19:08:06.305261       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0128 19:08:06.305268       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0128 19:08:06.305436       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d48c6c6d54e0] <==
	* I0128 19:08:09.738432       1 serving.go:348] Generated self-signed cert in-memory
	W0128 19:08:11.432267       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 19:08:11.432309       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 19:08:11.432318       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 19:08:11.432323       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 19:08:11.448501       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 19:08:11.448793       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:08:11.450017       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 19:08:11.450166       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 19:08:11.450203       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:08:11.450217       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:08:11.551423       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:03:18 UTC, end at Sat 2023-01-28 19:08:17 UTC. --
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550298   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5400a1f55751dffde1e15d3f4aadd33-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-510000\" (UID: \"d5400a1f55751dffde1e15d3f4aadd33\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550313   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5400a1f55751dffde1e15d3f4aadd33-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-510000\" (UID: \"d5400a1f55751dffde1e15d3f4aadd33\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550388   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/843a1a65fa9f24ebdc1229dad8d8a7f0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-510000\" (UID: \"843a1a65fa9f24ebdc1229dad8d8a7f0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550550   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/cfff141d75396f3c21e0fa47e14ade0e-etcd-certs\") pod \"etcd-kubernetes-upgrade-510000\" (UID: \"cfff141d75396f3c21e0fa47e14ade0e\") " pod="kube-system/etcd-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550702   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/041548233aa78fa5af616d0876fa92b6-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-510000\" (UID: \"041548233aa78fa5af616d0876fa92b6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550728   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5400a1f55751dffde1e15d3f4aadd33-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-510000\" (UID: \"d5400a1f55751dffde1e15d3f4aadd33\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550748   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5400a1f55751dffde1e15d3f4aadd33-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-510000\" (UID: \"d5400a1f55751dffde1e15d3f4aadd33\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550764   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/041548233aa78fa5af616d0876fa92b6-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-510000\" (UID: \"041548233aa78fa5af616d0876fa92b6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.550781   13552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5400a1f55751dffde1e15d3f4aadd33-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-510000\" (UID: \"d5400a1f55751dffde1e15d3f4aadd33\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.568783   13552 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: E0128 19:08:08.569323   13552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-510000"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.789961   13552 scope.go:115] "RemoveContainer" containerID="23b586b8ce6fe29375969d62714b5fc08f7175c87ff338dedc8c24d4a700991e"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.797599   13552 scope.go:115] "RemoveContainer" containerID="2d800d7f32b342bf67e928840dc97689be1b8abe64bf28824c2a7757212b441a"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.805003   13552 scope.go:115] "RemoveContainer" containerID="2727b66a847aec795196bb169759490eef160ba3d0af736a87bfff3e3de2e2a8"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.812437   13552 scope.go:115] "RemoveContainer" containerID="172917e17f02ca3fd641fecf548e7cac32cdadae839f5acf6ed1d78f3afaa715"
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: E0128 19:08:08.851118   13552 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-510000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:08:08 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:08.988515   13552 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-510000"
	Jan 28 19:08:09 kubernetes-upgrade-510000 kubelet[13552]: E0128 19:08:09.038009   13552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-510000"
	Jan 28 19:08:09 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:09.847293   13552 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-510000"
	Jan 28 19:08:11 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:11.527011   13552 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-510000"
	Jan 28 19:08:11 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:11.527111   13552 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-510000"
	Jan 28 19:08:11 kubernetes-upgrade-510000 kubelet[13552]: E0128 19:08:11.567809   13552 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-510000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-510000"
	Jan 28 19:08:12 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:12.239823   13552 apiserver.go:52] "Watching apiserver"
	Jan 28 19:08:12 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:12.249349   13552 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 28 19:08:12 kubernetes-upgrade-510000 kubelet[13552]: I0128 19:08:12.275966   13552 reconciler.go:41] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
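The dump above appears to be the post-mortem "minikube logs" capture (the "==> Docker <==", "==> container status <==", "==> describe nodes <==" and per-component sections are its output format) that the test helpers collect for the profile under test. Assuming the kubernetes-upgrade-510000 profile still exists at that point, the same capture can be regenerated by hand with the binary under test (a sketch, reusing the paths from this report):

	# Re-collect the post-mortem logs for the profile under test.
	out/minikube-darwin-amd64 -p kubernetes-upgrade-510000 logs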
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-510000 -n kubernetes-upgrade-510000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-510000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-510000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-510000 describe pod storage-provisioner: exit status 1 (53.108454ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-510000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-510000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-510000

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-510000: (2.797506944s)
--- FAIL: TestKubernetesUpgrade (557.24s)

                                                
                                    
TestMissingContainerUpgrade (51.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker 
E0128 10:58:47.291477    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker : exit status 78 (37.652089057s)

                                                
                                                
-- stdout --
	* [missing-upgrade-472000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-472000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-472000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:58:28.688171957 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-472000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:58:48.176171771 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
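The diff in the stderr above is the crux of this failure: the minikube v1.9.1 provisioner rewrites /lib/systemd/system/docker.service wholesale, and the comments in the generated unit describe the standard systemd pattern of first clearing an inherited ExecStart= and then defining a new one (note that the generated ExecReload= line has also lost the $MAINPID argument present in the stock unit). A minimal sketch of the same override pattern, done non-destructively with a drop-in on an ordinary systemd host with Docker installed; the path and flags below are illustrative, not minikube's provisioning code:

	# Override ExecStart via a drop-in instead of replacing the whole unit.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	# An empty ExecStart= clears the command inherited from docker.service;
	# without it systemd refuses to start the unit, since multiple ExecStart=
	# lines are only allowed for Type=oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker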
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker : exit status 70 (4.16548938s)

                                                
                                                
-- stdout --
	* [missing-upgrade-472000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-472000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-472000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1016897626.exe start -p missing-upgrade-472000 --memory=2200 --driver=docker : exit status 70 (3.990732872s)

                                                
                                                
-- stdout --
	* [missing-upgrade-472000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-472000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-472000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
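Both retries fail at the same step, "sudo systemctl start docker" inside the node. The error's own suggestion to check "systemctl status docker.service" and "journalctl -xe" can be followed from the host, since with the docker driver the node is just the container shown in the docker inspect output below. A sketch, assuming the missing-upgrade-472000 container is still running:

	# Read docker.service state and logs from inside the node container.
	docker exec missing-upgrade-472000 systemctl status docker.service --no-pager
	docker exec missing-upgrade-472000 journalctl -u docker.service --no-pager -n 50
	# Compare the rewritten unit against the diff captured in the stderr above.
	docker exec missing-upgrade-472000 cat /lib/systemd/system/docker.service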
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-28 10:59:01.126986 -0800 PST m=+2279.085822026
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-472000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55",
	        "Created": "2023-01-28T18:58:36.87389635Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 175712,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:58:37.099814957Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55/hostname",
	        "HostsPath": "/var/lib/docker/containers/89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55/hosts",
	        "LogPath": "/var/lib/docker/containers/89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55/89f902d88cd3494645cce07b886fef450fef1650cddbebbaadc74deca6ecae55-json.log",
	        "Name": "/missing-upgrade-472000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-472000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b9eb5e9bfce99f66e6ea0d6ac28d3084bdf8f7c7c284de69c3d4714abbcc3b95-init/diff:/var/lib/docker/overlay2/3a5eb793706dab6a00e3a6337ab8693407ba67ebe159e3a8c2f7a8c0b3340a1f/diff:/var/lib/docker/overlay2/846e0f8f0eea05ce251d0675fc0f7ec6773eaad6dcf2c80a006b06096e91d410/diff:/var/lib/docker/overlay2/9e3cce176eddaf6a58a3d25d8c2736a360dbf4a7f076f8e7c16807ad98e94eec/diff:/var/lib/docker/overlay2/44e6e4a48f2d20d013c13091f07263420d5b4dd98196f93e0773eefc75b8a387/diff:/var/lib/docker/overlay2/92b81764a5a76b852fb8fb3878770999095fda715fb6e430bb2f579507afc632/diff:/var/lib/docker/overlay2/198f800f261adea911ce679a354dbaa9cb62084a71d35918f83611227e44694f/diff:/var/lib/docker/overlay2/783a607a8dc5e07072214da7acc2c6be4c0640502cf72f9a030c5fe065c878d3/diff:/var/lib/docker/overlay2/0d52374ae2c42b9bd2a2aacdb1a3deee761e5ec3d448c06f57de44c308d2793c/diff:/var/lib/docker/overlay2/ab2f10b83aa92e554730a54decc55facffdde82f1ec075d8445adff8b6063de1/diff:/var/lib/docker/overlay2/39f444
4c02e5400a72216b45baa67a66bad9bceb554a579912cc202f17ea8b01/diff:/var/lib/docker/overlay2/5543e7f0f154691a204e607d13c5f862cc3f177dc9a3bc50027ddb6dc5712041/diff:/var/lib/docker/overlay2/afa6ceca0e1983b444bae85682aa4d21531feae3761ee2832679dffbe6ad6acc/diff:/var/lib/docker/overlay2/b5038bb2502f40b48d26d2580fa219f544c6c2768992099b6ab6ef05f93cc05b/diff:/var/lib/docker/overlay2/9b8375a1f55e0d49ada7c6f60d00981de88ae6d71c60d0eb949caf6f1ca98cea/diff:/var/lib/docker/overlay2/21d9f07453ff723a425280089cb459a9c97667f97c5df73916f537833e25360d/diff:/var/lib/docker/overlay2/9b4d5fbdf578ccc75369a75f362f3e38d366badfc69db2069cdec7eee6ebbf26/diff:/var/lib/docker/overlay2/c8db01a6ee6933f0aef59444bd6932612e2cf91965c41d576d1a14bc4c5e0da5/diff:/var/lib/docker/overlay2/fb26580dd02020f332cc077879db60b14a96f2e84768b8715cb9f9af59cc725c/diff:/var/lib/docker/overlay2/b9a63932903cc05817e33921a96e8d52c020a641232546dafcd1c125006d2b64/diff:/var/lib/docker/overlay2/222f3b62658e54bcc1f4e86007bb8e6f6cdcd16279bde733a17effc95a7b24b1/diff:/var/lib/d
ocker/overlay2/286d8f56d4871fa6dfdcc1be4a016db8b231a1cdd1e9bf81d02c1957ed6c21fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9eb5e9bfce99f66e6ea0d6ac28d3084bdf8f7c7c284de69c3d4714abbcc3b95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9eb5e9bfce99f66e6ea0d6ac28d3084bdf8f7c7c284de69c3d4714abbcc3b95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9eb5e9bfce99f66e6ea0d6ac28d3084bdf8f7c7c284de69c3d4714abbcc3b95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-472000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-472000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-472000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-472000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-472000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "617a760938e7163728d563a96e37618d82510c3adf2c60d0e741a0d1c20bdf77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52805"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52807"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/617a760938e7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e79282f953cf2f9d9535ef43d8f4d5951a002c9c7ae6fcf8f6ef83e507fedd27",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9bd710a2e2e93a89bc1ff3f9c3069eadb2765518501e82198f38305f5684cab6",
	                    "EndpointID": "e79282f953cf2f9d9535ef43d8f4d5951a002c9c7ae6fcf8f6ef83e507fedd27",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
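The full dump above is rarely needed during triage; docker inspect accepts a Go template via --format, so the post-mortem fields of interest can be pulled directly. A short sketch against the same container:

$ docker inspect missing-upgrade-472000 --format '{{.State.Status}} pid={{.State.Pid}} ip={{.NetworkSettings.IPAddress}}'
$ docker inspect missing-upgrade-472000 --format '{{index .NetworkSettings.Ports "8443/tcp"}}'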
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-472000 -n missing-upgrade-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-472000 -n missing-upgrade-472000: exit status 6 (391.481219ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0128 10:59:01.565153   14272 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-472000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-472000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
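The WARNING in the status output means the kubeconfig entry no longer matches the cluster endpoint, which is also why status.go fails to extract an IP. Had the profile been kept, the context could be reconciled as the message suggests; a sketch:

$ kubectl config current-context
$ out/minikube-darwin-amd64 update-context -p missing-upgrade-472000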
helpers_test.go:175: Cleaning up "missing-upgrade-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-472000: (2.328743825s)
--- FAIL: TestMissingContainerUpgrade (51.90s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.79s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2726829399/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2726829399/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2726829399/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2726829399/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
driver_install_or_update_test.go:218: invalid driver version. expected: testing, got: v1.29.0
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.79s)
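The two sudo commands above fail under --interactive=false because the agent cannot supply a password, so minikube falls back to the freshly downloaded driver. One conventional workaround is a scoped sudoers entry for just those commands; a sketch, assuming the agent runs as user jenkins and the standard macOS tool paths:

$ echo 'jenkins ALL=(root) NOPASSWD: /usr/sbin/chown, /bin/chmod' | sudo tee /private/etc/sudoers.d/minikube-hyperkit
$ sudo visudo -c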

TestStoppedBinaryUpgrade/Upgrade (50.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker 
E0128 11:00:38.524008    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker : exit status 70 (38.688329333s)

-- stdout --
	* [stopped-upgrade-118000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig4046637781
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:00:45.926783792 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-118000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:01:05.571833891 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-118000", then "minikube start -p stopped-upgrade-118000 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.97 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 83.80 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 125.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 161.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 196.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 236.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 283.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 327.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 366.01 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 401.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 422.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 463.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 508.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:01:05.571833891 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
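The comments embedded in the generated unit describe the systemd rule at play: for anything other than Type=oneshot, a service may carry only one ExecStart=, so an override must first clear the inherited list with an empty ExecStart= before supplying its own. The same pattern can be reproduced with an ordinary drop-in instead of rewriting /lib/systemd/system/docker.service; a sketch with an illustrative override, not minikube's actual provisioner output:

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/10-override.conf <<'EOF'
[Service]
# Clear the inherited command list first; otherwise systemd refuses the unit with
# "Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
$ sudo systemctl daemon-reload && sudo systemctl restart docker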
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker : exit status 70 (4.341025358s)

-- stdout --
	* [stopped-upgrade-118000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2087972082
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-118000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2023886200.exe start -p stopped-upgrade-118000 --memory=2200 --vm-driver=docker : exit status 70 (4.827334147s)

-- stdout --
	* [stopped-upgrade-118000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig4115541040
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-118000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (50.03s)
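When a legacy start exits with status 70 like this, the retry suggested in the output (-v=1) is one option; the full provisioning trace can also be captured for the issue template with the current binary from this run. A sketch:

$ out/minikube-darwin-amd64 logs -p stopped-upgrade-118000 --file=stopped-upgrade-118000.log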

TestStartStop/group/old-k8s-version/serial/FirstStart (251.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0128 11:14:04.171334    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 11:14:16.356699    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:14:16.587680    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
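The recurring cert_rotation messages appear to come from a client-go watcher still polling client certificates of profiles that earlier tests already deleted; the files under .minikube/profiles are gone, so each poll logs "no such file or directory". They are noise relative to this failure, and clearing leftover profile state silences them. A sketch (--purge also removes the shared .minikube folder, so it is only safe at the end of a run):

$ out/minikube-darwin-amd64 delete -p addons-869000
$ out/minikube-darwin-amd64 delete --all --purge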

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.145332396s)

-- stdout --
	* [old-k8s-version-867000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-867000 in cluster old-k8s-version-867000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0128 11:13:53.156240   20834 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:13:53.156508   20834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:13:53.156513   20834 out.go:309] Setting ErrFile to fd 2...
	I0128 11:13:53.156517   20834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:13:53.156627   20834 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:13:53.157211   20834 out.go:303] Setting JSON to false
	I0128 11:13:53.176401   20834 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4408,"bootTime":1674928825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:13:53.176511   20834 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:13:53.199232   20834 out.go:177] * [old-k8s-version-867000] minikube v1.29.0 on Darwin 13.2
	I0128 11:13:53.241653   20834 notify.go:220] Checking for updates...
	I0128 11:13:53.263861   20834 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:13:53.285772   20834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:13:53.307801   20834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:13:53.336014   20834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:13:53.356354   20834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:13:53.398416   20834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:13:53.420200   20834 config.go:180] Loaded profile config "kubenet-360000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:13:53.420318   20834 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:13:53.481788   20834 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:13:53.481948   20834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:13:53.627027   20834 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:13:53.538397088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:13:53.648644   20834 out.go:177] * Using the docker driver based on user configuration
	I0128 11:13:53.669418   20834 start.go:296] selected driver: docker
	I0128 11:13:53.669447   20834 start.go:857] validating driver "docker" against <nil>
	I0128 11:13:53.669469   20834 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:13:53.673356   20834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:13:53.821034   20834 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:13:53.729819194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:13:53.821139   20834 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 11:13:53.821343   20834 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:13:53.842619   20834 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 11:13:53.863363   20834 cni.go:84] Creating CNI manager for ""
	I0128 11:13:53.863378   20834 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:13:53.863389   20834 start_flags.go:319] config:
	{Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:13:53.905495   20834 out.go:177] * Starting control plane node old-k8s-version-867000 in cluster old-k8s-version-867000
	I0128 11:13:53.926391   20834 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:13:53.947503   20834 out.go:177] * Pulling base image ...
	I0128 11:13:53.989533   20834 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:13:53.989569   20834 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:13:53.989606   20834 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 11:13:53.989620   20834 cache.go:57] Caching tarball of preloaded images
	I0128 11:13:53.989746   20834 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:13:53.989757   20834 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 11:13:53.990293   20834 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/config.json ...
	I0128 11:13:53.990361   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/config.json: {Name:mk0cbf33f8de2c9f57b7efd946066a0212a1a31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:13:54.050487   20834 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:13:54.050504   20834 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:13:54.050523   20834 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:13:54.050597   20834 start.go:364] acquiring machines lock for old-k8s-version-867000: {Name:mk6bff3692844ef15630a267932d689c213153ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:13:54.050759   20834 start.go:368] acquired machines lock for "old-k8s-version-867000" in 149.212µs
	I0128 11:13:54.050793   20834 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:13:54.050875   20834 start.go:125] createHost starting for "" (driver="docker")
	I0128 11:13:54.073952   20834 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 11:13:54.074233   20834 start.go:159] libmachine.API.Create for "old-k8s-version-867000" (driver="docker")
	I0128 11:13:54.074263   20834 client.go:168] LocalClient.Create starting
	I0128 11:13:54.074385   20834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem
	I0128 11:13:54.074432   20834 main.go:141] libmachine: Decoding PEM data...
	I0128 11:13:54.074451   20834 main.go:141] libmachine: Parsing certificate...
	I0128 11:13:54.074516   20834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem
	I0128 11:13:54.074558   20834 main.go:141] libmachine: Decoding PEM data...
	I0128 11:13:54.074568   20834 main.go:141] libmachine: Parsing certificate...
	I0128 11:13:54.075020   20834 cli_runner.go:164] Run: docker network inspect old-k8s-version-867000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 11:13:54.134343   20834 cli_runner.go:211] docker network inspect old-k8s-version-867000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 11:13:54.134461   20834 network_create.go:281] running [docker network inspect old-k8s-version-867000] to gather additional debugging logs...
	I0128 11:13:54.134475   20834 cli_runner.go:164] Run: docker network inspect old-k8s-version-867000
	W0128 11:13:54.193899   20834 cli_runner.go:211] docker network inspect old-k8s-version-867000 returned with exit code 1
	I0128 11:13:54.193930   20834 network_create.go:284] error running [docker network inspect old-k8s-version-867000]: docker network inspect old-k8s-version-867000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-867000
	I0128 11:13:54.193943   20834 network_create.go:286] output of [docker network inspect old-k8s-version-867000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-867000
	
	** /stderr **
	I0128 11:13:54.194034   20834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 11:13:54.254862   20834 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:13:54.256297   20834 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:13:54.257630   20834 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:13:54.257977   20834 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00120a100}
	I0128 11:13:54.257988   20834 network_create.go:123] attempt to create docker network old-k8s-version-867000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0128 11:13:54.258056   20834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-867000 old-k8s-version-867000
	I0128 11:13:54.354570   20834 network_create.go:107] docker network old-k8s-version-867000 192.168.76.0/24 created
	I0128 11:13:54.354602   20834 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-867000" container
	I0128 11:13:54.354732   20834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 11:13:54.416783   20834 cli_runner.go:164] Run: docker volume create old-k8s-version-867000 --label name.minikube.sigs.k8s.io=old-k8s-version-867000 --label created_by.minikube.sigs.k8s.io=true
	I0128 11:13:54.475098   20834 oci.go:103] Successfully created a docker volume old-k8s-version-867000
	I0128 11:13:54.475234   20834 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-867000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-867000 --entrypoint /usr/bin/test -v old-k8s-version-867000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 11:13:54.910288   20834 oci.go:107] Successfully prepared a docker volume old-k8s-version-867000
	I0128 11:13:54.910322   20834 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:13:54.910354   20834 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 11:13:54.910456   20834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-867000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 11:14:00.721462   20834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-867000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (5.811000944s)
	I0128 11:14:00.721483   20834 kic.go:199] duration metric: took 5.811199 seconds to extract preloaded images to volume
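The preload step above mounts the host-side lz4 tarball read-only and the new volume read-write into a throwaway container, then untars straight into the volume, so the node container starts with its images already under /var. A hedged Go sketch of assembling that command with os/exec (the paths and image tag are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders: real paths come from the minikube cache, and the image
	// is pinned by digest in the log above.
	tarball := "/path/to/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	volume := "old-k8s-version-867000"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.37"

	// Equivalent of the logged command: a throwaway container whose only
	// job is to untar the preload into the named volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}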
	I0128 11:14:00.721603   20834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 11:14:00.876572   20834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-867000 --name old-k8s-version-867000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-867000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-867000 --network old-k8s-version-867000 --ip 192.168.76.2 --volume old-k8s-version-867000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 11:14:01.272739   20834 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Running}}
	I0128 11:14:01.339510   20834 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Status}}
	I0128 11:14:01.405818   20834 cli_runner.go:164] Run: docker exec old-k8s-version-867000 stat /var/lib/dpkg/alternatives/iptables
	I0128 11:14:01.529813   20834 oci.go:144] the created container "old-k8s-version-867000" has a running status.
	I0128 11:14:01.529847   20834 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa...
	I0128 11:14:01.590519   20834 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 11:14:01.714767   20834 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Status}}
	I0128 11:14:01.787479   20834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 11:14:01.787502   20834 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-867000 chown docker:docker /home/docker/.ssh/authorized_keys]
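The SSH bootstrap above generates a fresh RSA keypair on the host, copies the public half into the container as /home/docker/.ssh/authorized_keys, and fixes ownership with a privileged exec. A rough sketch of the key-generation half, shelling out to ssh-keygen (the directory is a placeholder; minikube stores keys under .minikube/machines/<profile>/):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	dir := "/tmp/demo-machine" // placeholder for .minikube/machines/<profile>
	if err := os.MkdirAll(dir, 0o700); err != nil {
		panic(err)
	}
	keyPath := dir + "/id_rsa"
	// -N "" gives an empty passphrase, which non-interactive provisioning needs.
	if out, err := exec.Command("ssh-keygen", "-t", "rsa", "-N", "", "-f", keyPath).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("ssh-keygen: %v: %s", err, out))
	}
	pub, err := os.ReadFile(keyPath + ".pub")
	if err != nil {
		panic(err)
	}
	// This is the content that would land in /home/docker/.ssh/authorized_keys.
	fmt.Printf("authorized_keys payload (%d bytes):\n%s", len(pub), pub)
}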
	I0128 11:14:01.915379   20834 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Status}}
	I0128 11:14:01.978168   20834 machine.go:88] provisioning docker machine ...
	I0128 11:14:01.978218   20834 ubuntu.go:169] provisioning hostname "old-k8s-version-867000"
	I0128 11:14:01.978306   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:02.038099   20834 main.go:141] libmachine: Using SSH client type: native
	I0128 11:14:02.038368   20834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55052 <nil> <nil>}
	I0128 11:14:02.038383   20834 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-867000 && echo "old-k8s-version-867000" | sudo tee /etc/hostname
	I0128 11:14:02.183215   20834 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-867000
	
	I0128 11:14:02.183310   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:02.243802   20834 main.go:141] libmachine: Using SSH client type: native
	I0128 11:14:02.243956   20834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55052 <nil> <nil>}
	I0128 11:14:02.243970   20834 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-867000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-867000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-867000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:14:02.376699   20834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:14:02.376721   20834 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:14:02.376740   20834 ubuntu.go:177] setting up certificates
	I0128 11:14:02.376747   20834 provision.go:83] configureAuth start
	I0128 11:14:02.376820   20834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:14:02.435951   20834 provision.go:138] copyHostCerts
	I0128 11:14:02.436046   20834 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:14:02.436055   20834 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:14:02.436170   20834 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:14:02.436360   20834 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:14:02.436366   20834 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:14:02.436431   20834 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:14:02.436570   20834 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:14:02.436576   20834 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:14:02.436637   20834 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:14:02.436747   20834 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-867000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-867000]
	I0128 11:14:02.584909   20834 provision.go:172] copyRemoteCerts
	I0128 11:14:02.584969   20834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:14:02.585019   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:02.648291   20834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55052 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:14:02.746763   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:14:02.765511   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0128 11:14:02.783947   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:14:02.802947   20834 provision.go:86] duration metric: configureAuth took 426.19243ms
	I0128 11:14:02.802960   20834 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:14:02.803101   20834 config.go:180] Loaded profile config "old-k8s-version-867000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:14:02.803160   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:02.864502   20834 main.go:141] libmachine: Using SSH client type: native
	I0128 11:14:02.864691   20834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55052 <nil> <nil>}
	I0128 11:14:02.864708   20834 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:14:02.999595   20834 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:14:02.999615   20834 ubuntu.go:71] root file system type: overlay
	I0128 11:14:02.999767   20834 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:14:02.999854   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:03.065015   20834 main.go:141] libmachine: Using SSH client type: native
	I0128 11:14:03.065181   20834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55052 <nil> <nil>}
	I0128 11:14:03.065236   20834 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:14:03.210456   20834 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:14:03.210565   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:03.274129   20834 main.go:141] libmachine: Using SSH client type: native
	I0128 11:14:03.274330   20834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55052 <nil> <nil>}
	I0128 11:14:03.274345   20834 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:14:03.899408   20834 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:14:03.207679914 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
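The unit update at 11:14:03 is a compare-then-swap: diff -u exits non-zero only when the rendered docker.service differs from what is on disk, so the mv / daemon-reload / enable / restart chain runs only when something actually changed, keeping the step idempotent. A hypothetical Go equivalent of that pattern (not minikube's provision.go; no sudo handling, no .new temp file):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit rewrites a systemd unit only when its content changed, then
// reloads systemd and restarts the service.
func updateUnit(path string, rendered []byte, service string) error {
	current, _ := os.ReadFile(path) // a missing file simply compares as empty
	if bytes.Equal(current, rendered) {
		return nil // unchanged: skip the disruptive restart entirely
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnit("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"), "docker")
	fmt.Println("update result:", err)
}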
	
	I0128 11:14:03.899429   20834 machine.go:91] provisioned docker machine in 1.921251975s
	I0128 11:14:03.899435   20834 client.go:171] LocalClient.Create took 9.825261606s
	I0128 11:14:03.899455   20834 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-867000" took 9.82531427s
	I0128 11:14:03.899463   20834 start.go:300] post-start starting for "old-k8s-version-867000" (driver="docker")
	I0128 11:14:03.899476   20834 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:14:03.899580   20834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:14:03.899660   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:03.962847   20834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55052 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:14:04.061713   20834 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:14:04.065856   20834 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:14:04.065873   20834 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:14:04.065882   20834 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:14:04.065887   20834 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:14:04.065900   20834 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:14:04.066007   20834 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:14:04.066185   20834 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:14:04.066411   20834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:14:04.073990   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:14:04.092194   20834 start.go:303] post-start completed in 192.717674ms
	I0128 11:14:04.092765   20834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:14:04.150482   20834 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/config.json ...
	I0128 11:14:04.150980   20834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:14:04.151058   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:04.211538   20834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55052 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:14:04.302470   20834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:14:04.307608   20834 start.go:128] duration metric: createHost completed in 10.256822625s
	I0128 11:14:04.307624   20834 start.go:83] releasing machines lock for "old-k8s-version-867000", held for 10.256953503s
	I0128 11:14:04.307713   20834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:14:04.367007   20834 ssh_runner.go:195] Run: cat /version.json
	I0128 11:14:04.367007   20834 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 11:14:04.367082   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:04.367136   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:04.433384   20834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55052 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:14:04.433378   20834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55052 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:14:04.739360   20834 ssh_runner.go:195] Run: systemctl --version
	I0128 11:14:04.744868   20834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:14:04.750604   20834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:14:04.774533   20834 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:14:04.774614   20834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:14:04.790405   20834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:14:04.799629   20834 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
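The find/sed pipelines above pin every bridge and podman CNI config to the 10.244.0.0/16 pod subnet and drop IPv6 dst/subnet entries. A hedged sketch of the same rewrite done structurally on the JSON instead of with sed (the sample config is invented for illustration; minikube's cni.go really does use sed):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// An invented bridge CNI config, standing in for /etc/cni/net.d/100-crio-bridge.conf.
	conf := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "crio",
	  "type": "bridge",
	  "ipam": {"type": "host-local", "ranges": [[{"subnet": "10.85.0.0/16"}]]}
	}`)

	var m map[string]interface{}
	if err := json.Unmarshal(conf, &m); err != nil {
		panic(err)
	}
	// Pin the pod subnet to the cluster-wide CIDR the kubeadm config uses.
	if ipam, ok := m["ipam"].(map[string]interface{}); ok {
		ipam["ranges"] = [][]map[string]string{{{"subnet": "10.244.0.0/16"}}}
	}
	out, err := json.MarshalIndent(m, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}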
	I0128 11:14:04.799647   20834 start.go:483] detecting cgroup driver to use...
	I0128 11:14:04.799661   20834 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:14:04.799794   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:14:04.815026   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 11:14:04.825253   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:14:04.834577   20834 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:14:04.834648   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:14:04.849408   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:14:04.863985   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:14:04.875188   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:14:04.885612   20834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:14:04.896077   20834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:14:04.905181   20834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:14:04.913644   20834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:14:04.920835   20834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:14:05.012624   20834 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:14:05.094583   20834 start.go:483] detecting cgroup driver to use...
	I0128 11:14:05.094603   20834 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:14:05.094672   20834 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:14:05.105600   20834 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:14:05.105665   20834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:14:05.117256   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:14:05.132180   20834 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:14:05.213255   20834 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:14:05.283940   20834 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:14:05.283956   20834 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:14:05.298397   20834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:14:05.385232   20834 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:14:05.606246   20834 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:14:05.638924   20834 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:14:05.713739   20834 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 11:14:05.713904   20834 cli_runner.go:164] Run: docker exec -t old-k8s-version-867000 dig +short host.docker.internal
	I0128 11:14:05.839508   20834 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:14:05.839681   20834 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:14:05.844379   20834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
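The /etc/hosts edit above is idempotent: grep -v strips any stale host.minikube.internal line before the fresh mapping is appended, and the result is swapped in through a temp file and sudo cp. A small Go sketch of the same transform, operating on a string to sidestep the privileged copy:

package main

import (
	"fmt"
	"strings"
)

// ensureHost returns the hosts content with exactly one "ip<TAB>name" entry.
func ensureHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n"
	fmt.Print(ensureHost(hosts, "192.168.65.2", "host.minikube.internal"))
}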
	I0128 11:14:05.854842   20834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:14:05.915009   20834 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:14:05.915091   20834 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:14:05.941481   20834 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:14:05.941505   20834 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:14:05.941599   20834 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:14:05.966423   20834 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:14:05.966442   20834 cache_images.go:84] Images are preloaded, skipping loading
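The preload verification above is a set comparison: list what the daemon already has via docker images --format and load only what is missing from the expected list. A minimal Go sketch of that check (the expected list is abridged from the stdout block above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Abridged from the stdout block above: images the preload must provide.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/pause:3.1",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would need to load:", img)
		}
	}
}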
	I0128 11:14:05.966542   20834 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:14:06.040803   20834 cni.go:84] Creating CNI manager for ""
	I0128 11:14:06.040824   20834 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:14:06.040845   20834 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:14:06.040860   20834 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-867000 NodeName:old-k8s-version-867000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:14:06.041063   20834 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-867000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-867000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:14:06.041195   20834 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-867000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:14:06.041339   20834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 11:14:06.049930   20834 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:14:06.049999   20834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:14:06.057706   20834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0128 11:14:06.072007   20834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:14:06.087299   20834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0128 11:14:06.101055   20834 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:14:06.105161   20834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:14:06.115368   20834 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000 for IP: 192.168.76.2
	I0128 11:14:06.115404   20834 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.115646   20834 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:14:06.115720   20834 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:14:06.115764   20834 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.key
	I0128 11:14:06.115781   20834 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.crt with IP's: []
	I0128 11:14:06.228928   20834 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.crt ...
	I0128 11:14:06.228943   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.crt: {Name:mkb3fdc745b9dc6f0f289131b70e0f27743cca81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.229285   20834 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.key ...
	I0128 11:14:06.229294   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.key: {Name:mkb0b27cfa70a612df49dfaa86a068a07bd23a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.229579   20834 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key.31bdca25
	I0128 11:14:06.229599   20834 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 11:14:06.310379   20834 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt.31bdca25 ...
	I0128 11:14:06.310402   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt.31bdca25: {Name:mkbfb8c17515f917d29094d995f99fdde8375832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.310715   20834 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key.31bdca25 ...
	I0128 11:14:06.310724   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key.31bdca25: {Name:mk89f76483b098a70ccfe60227b4d25aa76b4380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.310911   20834 certs.go:333] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt
	I0128 11:14:06.311080   20834 certs.go:337] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key
	I0128 11:14:06.311291   20834 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key
	I0128 11:14:06.311307   20834 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.crt with IP's: []
	I0128 11:14:06.426795   20834 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.crt ...
	I0128 11:14:06.426810   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.crt: {Name:mk1149fc8926c3f2be0e15d27dd713ec1295f563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:14:06.427094   20834 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key ...
	I0128 11:14:06.427107   20834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key: {Name:mkbe4b1512e2656cf3334323a963863947836d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
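The profile certificates above carry IP SANs for the node address, the service VIP, and loopback; the .31bdca25 suffix appears to key the file to that SAN set before it is copied to its final apiserver.crt/key name. For orientation, a self-contained Go sketch of minting an IP-SAN certificate with crypto/x509 — self-signed here for brevity, whereas minikube signs with its minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log: node IP, service VIP, loopback.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
		},
	}
	// Self-signed for brevity: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}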
	I0128 11:14:06.427515   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:14:06.427570   20834 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:14:06.427581   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:14:06.427613   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:14:06.427643   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:14:06.427676   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:14:06.427751   20834 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:14:06.428339   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:14:06.447711   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:14:06.466032   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:14:06.484971   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 11:14:06.502878   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:14:06.521072   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:14:06.539560   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:14:06.557885   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:14:06.575884   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:14:06.593890   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:14:06.611988   20834 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:14:06.630936   20834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:14:06.644239   20834 ssh_runner.go:195] Run: openssl version
	I0128 11:14:06.650021   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:14:06.658423   20834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:14:06.663018   20834 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:14:06.663070   20834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:14:06.668657   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:14:06.677437   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:14:06.685769   20834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:14:06.689985   20834 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:14:06.690054   20834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:14:06.695762   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:14:06.704009   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:14:06.712602   20834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:14:06.716843   20834 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:14:06.716893   20834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:14:06.722465   20834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
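The symlinks created above (38492.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0, 3849.pem -> 51391683.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus an index, which is how OpenSSL locates trust anchors in /etc/ssl/certs. A hedged Go sketch that shells out to openssl for the hash and recreates the link (requires write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Recreate the link idempotently, as the logged test -L || ln -fs does.
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}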
	I0128 11:14:06.730956   20834 kubeadm.go:401] StartCluster: {Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:14:06.731117   20834 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:14:06.754808   20834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:14:06.763862   20834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:14:06.772447   20834 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:14:06.772510   20834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:14:06.781327   20834 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:14:06.781355   20834 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:14:06.835722   20834 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:14:06.835764   20834 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:14:07.139094   20834 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:14:07.139221   20834 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:14:07.139328   20834 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0128 11:14:07.379107   20834 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:14:07.379911   20834 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:14:07.386182   20834 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:14:07.464472   20834 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:14:07.488944   20834 out.go:204]   - Generating certificates and keys ...
	I0128 11:14:07.489012   20834 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:14:07.489087   20834 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:14:07.658417   20834 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 11:14:07.782655   20834 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 11:14:07.886308   20834 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 11:14:08.120743   20834 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 11:14:08.281837   20834 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 11:14:08.281946   20834 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-867000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:14:08.422000   20834 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 11:14:08.422142   20834 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-867000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:14:08.530181   20834 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 11:14:08.647033   20834 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 11:14:08.776931   20834 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 11:14:08.777019   20834 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:14:08.844247   20834 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:14:09.043866   20834 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:14:09.122644   20834 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:14:09.298572   20834 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:14:09.299109   20834 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:14:09.320937   20834 out.go:204]   - Booting up control plane ...
	I0128 11:14:09.321032   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:14:09.321119   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:14:09.321177   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:14:09.321254   20834 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:14:09.321390   20834 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:14:49.308182   20834 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:14:49.308819   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:14:49.308970   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:14:54.309732   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:14:54.309895   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:15:04.311267   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:15:04.311486   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:15:24.313095   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:15:24.313353   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:16:04.314362   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:16:04.314605   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:16:04.314627   20834 kubeadm.go:322] 
	I0128 11:16:04.314674   20834 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:16:04.314721   20834 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:16:04.314733   20834 kubeadm.go:322] 
	I0128 11:16:04.314778   20834 kubeadm.go:322] This error is likely caused by:
	I0128 11:16:04.314827   20834 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:16:04.314970   20834 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:16:04.314987   20834 kubeadm.go:322] 
	I0128 11:16:04.315085   20834 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:16:04.315122   20834 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:16:04.315151   20834 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:16:04.315156   20834 kubeadm.go:322] 
	I0128 11:16:04.315277   20834 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:16:04.315384   20834 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:16:04.315479   20834 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:16:04.315537   20834 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:16:04.315614   20834 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:16:04.315667   20834 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:16:04.318134   20834 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:16:04.318205   20834 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:16:04.318304   20834 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:16:04.318386   20834 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:16:04.318465   20834 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:16:04.318531   20834 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
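Of the four preflight warnings above, the IsDockerSystemdCheck one is the usual suspect when the kubelet never answers its healthz probe: Docker in the node is running with the cgroupfs cgroup driver while kubeadm recommends systemd, and a driver mismatch between Docker and the kubelet can keep the kubelet from ever becoming healthy. A minimal sketch of switching Docker to the systemd driver inside the node container; the container name is assumed to match the profile (old-k8s-version-867000, per the certificate DNS names above) and /etc/docker/daemon.json is the stock Docker config path, so treat both as assumptions rather than a verified fix:

	# Assumption: the docker-driver node container is named after the profile.
	docker exec old-k8s-version-867000 /bin/bash -c \
	  'echo "{ \"exec-opts\": [\"native.cgroupdriver=systemd\"] }" > /etc/docker/daemon.json \
	   && systemctl restart docker'

The kubelet's own --cgroup-driver would have to be switched to match, so this only illustrates the warning's suggestion, not the remediation minikube itself would apply.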
	W0128 11:16:04.318710   20834 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-867000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-867000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 11:16:04.318740   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:16:04.738232   20834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:16:04.751282   20834 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:16:04.751367   20834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:16:04.760445   20834 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
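The status-2 exit here is expected rather than a fresh failure: the kubeadm reset a few lines up already wiped the kubeconfig files, so there is no stale config to clean and minikube moves straight to the second init attempt. A one-liner to confirm the wipe by hand, under the same assumption that the node container is named after the profile:

	docker exec old-k8s-version-867000 ls -la /etc/kubernetes /etc/kubernetes/manifests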
	I0128 11:16:04.760472   20834 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:16:04.810670   20834 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:16:04.810733   20834 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:16:05.118458   20834 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:16:05.118555   20834 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:16:05.118637   20834 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:16:05.348374   20834 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:16:05.349090   20834 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:16:05.355789   20834 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:16:05.418751   20834 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:16:05.442967   20834 out.go:204]   - Generating certificates and keys ...
	I0128 11:16:05.443085   20834 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:16:05.443149   20834 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:16:05.443216   20834 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:16:05.443266   20834 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:16:05.443318   20834 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:16:05.443359   20834 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:16:05.443414   20834 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:16:05.443471   20834 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:16:05.443521   20834 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:16:05.443588   20834 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:16:05.443625   20834 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:16:05.443666   20834 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:16:05.888483   20834 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:16:06.303949   20834 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:16:06.410562   20834 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:16:06.639237   20834 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:16:06.639894   20834 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:16:06.683170   20834 out.go:204]   - Booting up control plane ...
	I0128 11:16:06.683291   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:16:06.683377   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:16:06.683470   20834 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:16:06.683551   20834 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:16:06.683701   20834 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:16:46.650523   20834 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:16:46.651332   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:16:46.651550   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:16:51.651976   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:16:51.652143   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:17:01.652951   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:17:01.653104   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:17:21.653436   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:17:21.653626   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:18:01.654715   20834 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:18:01.654923   20834 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:18:01.654934   20834 kubeadm.go:322] 
	I0128 11:18:01.655001   20834 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:18:01.655064   20834 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:18:01.655073   20834 kubeadm.go:322] 
	I0128 11:18:01.655131   20834 kubeadm.go:322] This error is likely caused by:
	I0128 11:18:01.655174   20834 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:18:01.655283   20834 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:18:01.655292   20834 kubeadm.go:322] 
	I0128 11:18:01.655393   20834 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:18:01.655432   20834 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:18:01.655467   20834 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:18:01.655476   20834 kubeadm.go:322] 
	I0128 11:18:01.655599   20834 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:18:01.655716   20834 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:18:01.655810   20834 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:18:01.655893   20834 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:18:01.656012   20834 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:18:01.656075   20834 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:18:01.658202   20834 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:18:01.658277   20834 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:18:01.658388   20834 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:18:01.658481   20834 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:18:01.658552   20834 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:18:01.658599   20834 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:18:01.658638   20834 kubeadm.go:403] StartCluster complete in 3m54.929924407s
	I0128 11:18:01.658726   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:18:01.682347   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.682361   20834 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:18:01.682454   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:18:01.706128   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.706141   20834 logs.go:281] No container was found matching "etcd"
	I0128 11:18:01.706210   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:18:01.728908   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.728921   20834 logs.go:281] No container was found matching "coredns"
	I0128 11:18:01.728996   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:18:01.753167   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.753181   20834 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:18:01.753250   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:18:01.775979   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.775992   20834 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:18:01.776061   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:18:01.799423   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.799438   20834 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:18:01.799523   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:18:01.823708   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.823722   20834 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:18:01.823794   20834 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:18:01.847374   20834 logs.go:279] 0 containers: []
	W0128 11:18:01.847386   20834 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:18:01.847393   20834 logs.go:124] Gathering logs for kubelet ...
	I0128 11:18:01.847403   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:18:01.886235   20834 logs.go:124] Gathering logs for dmesg ...
	I0128 11:18:01.886254   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:18:01.900408   20834 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:18:01.900422   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:18:01.955303   20834 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:18:01.955314   20834 logs.go:124] Gathering logs for Docker ...
	I0128 11:18:01.955320   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:18:01.972921   20834 logs.go:124] Gathering logs for container status ...
	I0128 11:18:01.972935   20834 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:18:04.025251   20834 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052323622s)
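All eight container filters above come back empty, consistent with a kubelet that never started any static pods. The probe kubeadm kept retrying, and the two troubleshooting commands its output suggests, can be replayed by hand from the host; a sketch under the same container-name assumption:

	# The healthz probe kubeadm retried until the 4m0s timeout (10248 is the kubelet health port):
	docker exec old-k8s-version-867000 curl -sSL http://localhost:10248/healthz
	# kubelet status and recent journal, per the hints in the kubeadm output above:
	docker exec old-k8s-version-867000 systemctl status kubelet --no-pager
	docker exec old-k8s-version-867000 journalctl -u kubelet -n 50 --no-pager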
	W0128 11:18:04.025386   20834 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:18:04.025401   20834 out.go:239] * 
	W0128 11:18:04.025521   20834 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:18:04.025534   20834 out.go:239] * 
	W0128 11:18:04.026182   20834 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
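Scoped to this run, the log collection the box asks for would use the profile flag, following the CLI form used elsewhere in this report; the profile name is inferred from the certificate DNS names in this log:

	out/minikube-darwin-amd64 -p old-k8s-version-867000 logs --file=logs.txt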
	I0128 11:18:04.110508   20834 out.go:177] 
	W0128 11:18:04.153069   20834 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
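K8S_KUBELET_NOT_RUNNING is minikube's exit reason for exactly the condition traced above: kubeadm's wait-control-plane phase gave up because the kubelet never served its health endpoint. A first step for digging further, using minikube's own ssh wrapper rather than raw docker exec (same profile-name assumption):

	out/minikube-darwin-amd64 -p old-k8s-version-867000 ssh 'sudo journalctl -u kubelet --no-pager | tail -n 50'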
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:18:04.153179   20834 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:18:04.153217   20834 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:18:04.174646   20834 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
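Note: exit status 109 corresponds to the K8S_KUBELET_NOT_RUNNING error above. Following the log's own suggestion (and the "cgroupfs" cgroup-driver warning in the preflight output), a manual retry might look like the sketch below; this is an illustrative invocation, not part of the test itself:

	out/minikube-darwin-amd64 start -p old-k8s-version-867000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
	# then inspect the kubelet inside the node, as the kubeadm output recommends:
	out/minikube-darwin-amd64 ssh -p old-k8s-version-867000 "sudo journalctl -xeu kubelet"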
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:14:01.264450163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1965b08fee08b8af3afcb0cd99ff5e9095d1796192376cf6c580470e41c37ec4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55056"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1965b08fee08",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "135b64967c344fdb2ea21fbb73e05567bd790c79731b1337b03704ac6cb97d2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 6 (424.627232ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:18:04.749541   21903 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-867000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
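Note: the status error above ("old-k8s-version-867000" does not appear in .../kubeconfig) indicates the start aborted before minikube wrote this cluster's kubeconfig entry, leaving kubectl pointed at a stale context. Had the cluster actually come up, the fix named in the warning would look roughly like this (an illustrative sketch, not part of the test):

	out/minikube-darwin-amd64 update-context -p old-k8s-version-867000
	kubectl config get-contexts   # follow-up check that the context now exists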
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-867000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-867000 create -f testdata/busybox.yaml: exit status 1 (35.667601ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-867000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-867000 create -f testdata/busybox.yaml failed: exit status 1
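Note: this failure cascades from FirstStart above. Because kubeadm never initialized the cluster, no "old-k8s-version-867000" context was ever written to the kubeconfig, so any kubectl call scoped to that context fails immediately. A quick manual check of which contexts do exist (a diagnostic sketch, not part of the test) would be:

	kubectl config get-contexts
	kubectl --context old-k8s-version-867000 get nodes   # would reproduce the same "does not exist" error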
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:14:01.264450163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1965b08fee08b8af3afcb0cd99ff5e9095d1796192376cf6c580470e41c37ec4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55056"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1965b08fee08",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "135b64967c344fdb2ea21fbb73e05567bd790c79731b1337b03704ac6cb97d2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 6 (433.642537ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:18:05.280665   21918 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-867000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:14:01.264450163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1965b08fee08b8af3afcb0cd99ff5e9095d1796192376cf6c580470e41c37ec4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55056"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1965b08fee08",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "135b64967c344fdb2ea21fbb73e05567bd790c79731b1337b03704ac6cb97d2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 6 (421.707676ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:18:05.766729   21930 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-867000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-867000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
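Note: the --images and --registries flags override an addon's default image and registry per component; here the test points metrics-server's registry at fake.domain and its image at echoserver, presumably to verify the overrides are honored rather than to actually pull a working image. Without overrides, the equivalent invocation would simply be:

	out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-867000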
E0128 11:18:06.773119    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:06.779532    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:06.791756    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:06.812931    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:06.855324    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:06.937216    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:07.097576    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:07.417841    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:08.060132    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:09.342430    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:10.148107    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:18:11.902495    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:17.022937    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:22.115373    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:18:27.264132    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:30.628066    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:18:47.216000    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 11:18:47.280659    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 11:18:47.744015    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:18:52.546989    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:19:04.218715    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 11:19:08.177106    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:19:11.639033    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:19:16.636186    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 11:19:23.095706    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.102083    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.112984    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.135263    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.175662    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.255795    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.415921    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:23.736756    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:24.377550    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:25.658467    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:28.219719    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:28.810638    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:19:30.289464    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:19:33.339944    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-867000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.198378351s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-867000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
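
The stderr above shows the actual failure: every kubectl apply inside the node gets "connection refused" on 127.0.0.1:8443, i.e. the apiserver is not listening, so the addon manifests are never applied. A minimal check of the apiserver from inside the node, sketched with minikube's ssh pass-through (/healthz is the conventional apiserver health path; curl is assumed available in the node image):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-867000 -- curl -k https://localhost:8443/healthz
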
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-867000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-867000 describe deploy/metrics-server -n kube-system: exit status 1 (36.387629ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-867000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-867000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
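
The expected image reference is just the --registries value prefixed to the --images value from the enable command: fake.domain + "/" + k8s.gcr.io/echoserver:1.4. Had the context existed, the deployed image could have been read back directly; a sketch using a standard jsonpath query (it assumes metrics-server runs a single container):

	kubectl --context old-k8s-version-867000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/k8s.gcr.io/echoserver:1.4
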
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:14:01.264450163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1965b08fee08b8af3afcb0cd99ff5e9095d1796192376cf6c580470e41c37ec4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55056"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1965b08fee08",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "135b64967c344fdb2ea21fbb73e05567bd790c79731b1337b03704ac6cb97d2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
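
All five exposed ports are bound with HostPort "0" in HostConfig, so Docker allocates ephemeral host ports at start; NetworkSettings.Ports above shows the resolution (8443 -> 127.0.0.1:55056). One mapping can be read back with the standard shorthand:

	docker port old-k8s-version-867000 8443
	# per the dump above: 127.0.0.1:55056
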
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 6 (422.371962ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:19:35.534204   22026 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-867000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.72s)
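
Both post-mortems in this group report state="Running" with the kubeconfig entry missing, so the container itself is healthy. The SecondStart log below shows the driver making exactly this distinction with a one-line state query; the same command separates a genuinely stopped container from the stale-kubeconfig case seen here:

	docker container inspect old-k8s-version-867000 --format={{.State.Status}}
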

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (497.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0128 11:19:40.463436    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:19:43.581348    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:19:57.980010    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:20:04.062813    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:20:08.149495    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:20:33.559422    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:20:45.023040    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:20:50.730774    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:21:02.048082    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:21:08.751737    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:21:24.332577    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:21:36.437834    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:21:52.017216    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:22:06.944256    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:22:49.707276    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:22:54.483625    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:23:06.824128    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:23:17.399590    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:23:30.418253    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 11:23:34.572838    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
E0128 11:23:47.331211    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 11:24:04.219197    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 11:24:16.635907    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 11:24:23.095553    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:24:30.289120    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:24:40.463452    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:24:50.784364    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m12.898051585s)

                                                
                                                
-- stdout --
	* [old-k8s-version-867000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-867000 in cluster old-k8s-version-867000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-867000" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 11:19:37.596720   22056 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:19:37.596882   22056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:19:37.596888   22056 out.go:309] Setting ErrFile to fd 2...
	I0128 11:19:37.596891   22056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:19:37.597005   22056 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:19:37.597492   22056 out.go:303] Setting JSON to false
	I0128 11:19:37.616322   22056 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4752,"bootTime":1674928825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:19:37.616425   22056 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:19:37.638350   22056 out.go:177] * [old-k8s-version-867000] minikube v1.29.0 on Darwin 13.2
	I0128 11:19:37.680691   22056 notify.go:220] Checking for updates...
	I0128 11:19:37.702692   22056 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:19:37.723915   22056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:19:37.745718   22056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:19:37.766857   22056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:19:37.808638   22056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:19:37.872597   22056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:19:37.894459   22056 config.go:180] Loaded profile config "old-k8s-version-867000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:19:37.916669   22056 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0128 11:19:37.937544   22056 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:19:38.001699   22056 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:19:38.001830   22056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:19:38.144862   22056 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:19:38.051725184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:19:38.187380   22056 out.go:177] * Using the docker driver based on existing profile
	I0128 11:19:38.208317   22056 start.go:296] selected driver: docker
	I0128 11:19:38.208369   22056 start.go:857] validating driver "docker" against &{Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:19:38.208470   22056 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:19:38.211844   22056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:19:38.356249   22056 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:19:38.264120827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:19:38.356413   22056 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:19:38.356432   22056 cni.go:84] Creating CNI manager for ""
	I0128 11:19:38.356442   22056 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:19:38.356452   22056 start_flags.go:319] config:
	{Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:19:38.378483   22056 out.go:177] * Starting control plane node old-k8s-version-867000 in cluster old-k8s-version-867000
	I0128 11:19:38.400039   22056 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:19:38.420812   22056 out.go:177] * Pulling base image ...
	I0128 11:19:38.463106   22056 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:19:38.463127   22056 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:19:38.463190   22056 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 11:19:38.463209   22056 cache.go:57] Caching tarball of preloaded images
	I0128 11:19:38.464089   22056 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:19:38.464221   22056 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 11:19:38.464639   22056 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/config.json ...
	I0128 11:19:38.519841   22056 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:19:38.519859   22056 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:19:38.519878   22056 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:19:38.519926   22056 start.go:364] acquiring machines lock for old-k8s-version-867000: {Name:mk6bff3692844ef15630a267932d689c213153ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:19:38.520020   22056 start.go:368] acquired machines lock for "old-k8s-version-867000" in 74.5µs
	I0128 11:19:38.520043   22056 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:19:38.520054   22056 fix.go:55] fixHost starting: 
	I0128 11:19:38.520291   22056 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Status}}
	I0128 11:19:38.579024   22056 fix.go:103] recreateIfNeeded on old-k8s-version-867000: state=Stopped err=<nil>
	W0128 11:19:38.579058   22056 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:19:38.622751   22056 out.go:177] * Restarting existing docker container for "old-k8s-version-867000" ...
	I0128 11:19:38.644638   22056 cli_runner.go:164] Run: docker start old-k8s-version-867000
	I0128 11:19:38.989424   22056 cli_runner.go:164] Run: docker container inspect old-k8s-version-867000 --format={{.State.Status}}
	I0128 11:19:39.052218   22056 kic.go:426] container "old-k8s-version-867000" state is running.
	I0128 11:19:39.052776   22056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:19:39.118095   22056 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/config.json ...
	I0128 11:19:39.118978   22056 machine.go:88] provisioning docker machine ...
	I0128 11:19:39.119011   22056 ubuntu.go:169] provisioning hostname "old-k8s-version-867000"
	I0128 11:19:39.119093   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:39.194052   22056 main.go:141] libmachine: Using SSH client type: native
	I0128 11:19:39.194270   22056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55320 <nil> <nil>}
	I0128 11:19:39.194290   22056 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-867000 && echo "old-k8s-version-867000" | sudo tee /etc/hostname
	I0128 11:19:39.337716   22056 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-867000
	
	I0128 11:19:39.337797   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:39.400128   22056 main.go:141] libmachine: Using SSH client type: native
	I0128 11:19:39.400300   22056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55320 <nil> <nil>}
	I0128 11:19:39.400315   22056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-867000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-867000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-867000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:19:39.538386   22056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:19:39.538406   22056 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:19:39.538429   22056 ubuntu.go:177] setting up certificates
	I0128 11:19:39.538436   22056 provision.go:83] configureAuth start
	I0128 11:19:39.538521   22056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:19:39.601009   22056 provision.go:138] copyHostCerts
	I0128 11:19:39.601125   22056 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:19:39.601135   22056 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:19:39.601234   22056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:19:39.601445   22056 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:19:39.601454   22056 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:19:39.601520   22056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:19:39.601705   22056 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:19:39.601711   22056 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:19:39.601786   22056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:19:39.601946   22056 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-867000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-867000]
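
[editor's note] The SAN list logged above (IPs plus DNS names, signed by the shared CA) is what ends up inside server.pem. A minimal, self-contained Go sketch of producing such a certificate with crypto/x509 follows; it is illustrative only, not minikube's actual provision code, and the names and SANs are copied from the log line rather than from any API.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical CA; minikube loads an existing ca.pem/ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the same kind of SAN set seen in the log:
		// IP addresses and DNS names a TLS client may use to reach dockerd.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-867000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-867000"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // errors elided for brevity
	}
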
	I0128 11:19:39.855797   22056 provision.go:172] copyRemoteCerts
	I0128 11:19:39.855866   22056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:19:39.855932   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:39.915681   22056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55320 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:19:40.009472   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:19:40.027348   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:19:40.044687   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0128 11:19:40.062379   22056 provision.go:86] duration metric: configureAuth took 523.928971ms
	I0128 11:19:40.062393   22056 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:19:40.062609   22056 config.go:180] Loaded profile config "old-k8s-version-867000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:19:40.062736   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:40.123151   22056 main.go:141] libmachine: Using SSH client type: native
	I0128 11:19:40.123321   22056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55320 <nil> <nil>}
	I0128 11:19:40.123335   22056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:19:40.256977   22056 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:19:40.256990   22056 ubuntu.go:71] root file system type: overlay
	I0128 11:19:40.257162   22056 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:19:40.257252   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:40.317866   22056 main.go:141] libmachine: Using SSH client type: native
	I0128 11:19:40.318032   22056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55320 <nil> <nil>}
	I0128 11:19:40.318088   22056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:19:40.460430   22056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:19:40.460538   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:40.520351   22056 main.go:141] libmachine: Using SSH client type: native
	I0128 11:19:40.520512   22056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55320 <nil> <nil>}
	I0128 11:19:40.520525   22056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:19:40.662132   22056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
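
[editor's note] The `diff -u ... || { mv ...; systemctl ... restart docker; }` command above only swaps in docker.service.new and restarts the daemon when the unit actually changed; here the diff was empty, so the restart was skipped. A rough Go equivalent of that write-if-changed pattern, under the assumption that the caller supplies the reload command (paths below are placeholders, not minikube internals):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// installIfChanged writes content to path and runs the reload command
	// only when the on-disk bytes differ, mirroring the diff-||-mv idiom.
	func installIfChanged(path string, content []byte, reload ...string) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unit unchanged; skip the service restart entirely
		}
		if err := os.WriteFile(path, content, 0o644); err != nil {
			return err
		}
		cmd := exec.Command(reload[0], reload[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		unit := []byte("[Unit]\nDescription=example\n")
		// Hypothetical invocation; a real caller would run daemon-reload + restart.
		_ = installIfChanged("/tmp/docker.service", unit, "true")
	}
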
	I0128 11:19:40.662147   22056 machine.go:91] provisioned docker machine in 1.543157763s
	I0128 11:19:40.662155   22056 start.go:300] post-start starting for "old-k8s-version-867000" (driver="docker")
	I0128 11:19:40.662172   22056 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:19:40.662252   22056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:19:40.662309   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:40.722035   22056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55320 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:19:40.817219   22056 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:19:40.820916   22056 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:19:40.820932   22056 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:19:40.820946   22056 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:19:40.820951   22056 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:19:40.820959   22056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:19:40.821067   22056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:19:40.821250   22056 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:19:40.821441   22056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:19:40.828858   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:19:40.846328   22056 start.go:303] post-start completed in 184.153896ms
	I0128 11:19:40.846404   22056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:19:40.846460   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:40.906603   22056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55320 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:19:40.998081   22056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:19:41.002841   22056 fix.go:57] fixHost completed within 2.482786105s
	I0128 11:19:41.002865   22056 start.go:83] releasing machines lock for "old-k8s-version-867000", held for 2.482837025s
	I0128 11:19:41.002976   22056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-867000
	I0128 11:19:41.062953   22056 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 11:19:41.062953   22056 ssh_runner.go:195] Run: cat /version.json
	I0128 11:19:41.063053   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:41.063053   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:41.130367   22056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55320 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:19:41.130576   22056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55320 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/old-k8s-version-867000/id_rsa Username:docker}
	I0128 11:19:41.422065   22056 ssh_runner.go:195] Run: systemctl --version
	I0128 11:19:41.426905   22056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 11:19:41.431601   22056 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 11:19:41.431657   22056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:19:41.439714   22056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:19:41.447873   22056 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0128 11:19:41.447891   22056 start.go:483] detecting cgroup driver to use...
	I0128 11:19:41.447904   22056 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:19:41.448052   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:19:41.461187   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 11:19:41.469972   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:19:41.478516   22056 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:19:41.478574   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:19:41.488342   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:19:41.497439   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:19:41.505843   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:19:41.514335   22056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:19:41.522306   22056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:19:41.531283   22056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:19:41.538762   22056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:19:41.545889   22056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:19:41.617248   22056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:19:41.690674   22056 start.go:483] detecting cgroup driver to use...
	I0128 11:19:41.690693   22056 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:19:41.690759   22056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:19:41.701878   22056 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:19:41.702028   22056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:19:41.713737   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:19:41.728915   22056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:19:41.835933   22056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:19:41.920450   22056 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:19:41.920466   22056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
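
[editor's note] The 144-byte daemon.json pushed here is what switches dockerd's cgroup driver; its exact contents are not logged. A sketch of generating such a file in Go, where the field set is an assumption based on Docker's documented exec-opts key rather than on minikube's template:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed shape: dockerd reads native.cgroupdriver from exec-opts.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
			"log-opts":   map[string]string{"max-size": "100m"},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out)) // would be written to /etc/docker/daemon.json
	}
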
	I0128 11:19:41.934487   22056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:19:42.034579   22056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:19:42.235869   22056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:19:42.267922   22056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:19:42.339573   22056 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 11:19:42.339764   22056 cli_runner.go:164] Run: docker exec -t old-k8s-version-867000 dig +short host.docker.internal
	I0128 11:19:42.459503   22056 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:19:42.459611   22056 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:19:42.464222   22056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
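
[editor's note] The one-liner above strips any stale host.minikube.internal line before appending the fresh mapping, so repeated starts never accumulate duplicate entries. A Go sketch of the same grep-v-then-append idea; the helper name and the /tmp path are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<host>" and
	// appends "ip\thost", which is what the shell pipeline does to /etc/hosts.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = ensureHostsEntry("/tmp/hosts", "192.168.65.2", "host.minikube.internal")
	}
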
	I0128 11:19:42.474388   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:42.535019   22056 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:19:42.535125   22056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:19:42.559665   22056 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:19:42.559682   22056 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:19:42.559763   22056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:19:42.584256   22056 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:19:42.584276   22056 cache_images.go:84] Images are preloaded, skipping loading
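
[editor's note] "Images are preloaded, skipping loading" is decided by listing what the runtime already has and checking it against the required set. A minimal Go sketch of that comparison; the required list is abbreviated from the stdout block above, and this is only the docker invocation, not minikube's cache_images logic:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		required := []string{
			"k8s.gcr.io/kube-apiserver:v1.16.0",
			"k8s.gcr.io/etcd:3.3.15-0",
			"k8s.gcr.io/pause:3.1",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
				return
			}
		}
		fmt.Println("Images already preloaded, skipping extraction")
	}
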
	I0128 11:19:42.584363   22056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:19:42.657404   22056 cni.go:84] Creating CNI manager for ""
	I0128 11:19:42.657421   22056 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:19:42.657435   22056 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:19:42.657451   22056 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-867000 NodeName:old-k8s-version-867000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:19:42.657577   22056 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-867000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-867000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:19:42.657660   22056 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-867000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
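
[editor's note] The kubelet ExecStart above is assembled from the node config that follows it (version, runtime, hostname, node IP). A text/template sketch of rendering such a drop-in; the template text is illustrative, not minikube's embedded one:

	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime={{.Runtime}} --hostname-override={{.Node}} --node-ip={{.IP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"Version": "v1.16.0",
			"Runtime": "docker",
			"Node":    "old-k8s-version-867000",
			"IP":      "192.168.76.2",
		})
	}
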
	I0128 11:19:42.657736   22056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 11:19:42.665956   22056 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:19:42.666027   22056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:19:42.674307   22056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0128 11:19:42.688333   22056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:19:42.701961   22056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0128 11:19:42.715495   22056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:19:42.719671   22056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:19:42.729865   22056 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000 for IP: 192.168.76.2
	I0128 11:19:42.729885   22056 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:19:42.730084   22056 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:19:42.730161   22056 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:19:42.730262   22056 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/client.key
	I0128 11:19:42.730343   22056 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key.31bdca25
	I0128 11:19:42.730410   22056 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key
	I0128 11:19:42.730646   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:19:42.730692   22056 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:19:42.730703   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:19:42.730744   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:19:42.730784   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:19:42.730816   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:19:42.730885   22056 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:19:42.731467   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:19:42.749356   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:19:42.767587   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:19:42.786172   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/old-k8s-version-867000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 11:19:42.804618   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:19:42.822664   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:19:42.841051   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:19:42.858667   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:19:42.876360   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:19:42.894655   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:19:42.912718   22056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:19:42.931353   22056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:19:42.945326   22056 ssh_runner.go:195] Run: openssl version
	I0128 11:19:42.950996   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:19:42.959897   22056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:19:42.963996   22056 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:19:42.964043   22056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:19:42.969549   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 11:19:42.977525   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:19:42.985911   22056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:19:42.990358   22056 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:19:42.990415   22056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:19:42.996336   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:19:43.003991   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:19:43.012429   22056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:19:43.016525   22056 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:19:43.016573   22056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:19:43.022590   22056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
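
[editor's note] The hex names being linked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash of a certificate's subject, and the <hash>.0 symlink is how the system trust store looks a CA up by subject. A small Go wrapper around the same two steps, assuming openssl is on PATH; the paths are illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert computes the OpenSSL subject hash for certPath and points
	// <hash>.0 in certsDir at it, the convention used in the log above.
	func trustCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
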
	I0128 11:19:43.030474   22056 kubeadm.go:401] StartCluster: {Name:old-k8s-version-867000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-867000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:19:43.030613   22056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:19:43.055550   22056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:19:43.063887   22056 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:19:43.063904   22056 kubeadm.go:633] restartCluster start
	I0128 11:19:43.063960   22056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:19:43.071204   22056 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:43.071274   22056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-867000
	I0128 11:19:43.132972   22056 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-867000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:19:43.133152   22056 kubeconfig.go:146] "old-k8s-version-867000" context is missing from /Users/jenkins/minikube-integration/15565-2556/kubeconfig - will repair!
	I0128 11:19:43.133463   22056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/kubeconfig: {Name:mk9285754a110019f97a480561fbfd0056cc86f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:19:43.134818   22056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:19:43.143290   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:43.143348   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:43.152738   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:43.653080   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:43.653174   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:43.663063   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:44.154010   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:44.154193   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:44.165074   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:44.654907   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:44.655065   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:44.666173   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:45.153040   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:45.153149   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:45.164558   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:45.652855   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:45.653010   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:45.663813   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:46.152901   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:46.153111   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:46.163909   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:46.652843   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:46.652982   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:46.664012   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:47.153080   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:47.153212   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:47.164061   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:47.654839   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:47.655115   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:47.666483   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:48.152857   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:48.153007   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:48.163873   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:48.653284   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:48.653499   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:48.664514   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:49.153222   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:49.153368   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:49.163098   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:49.653366   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:49.653594   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:49.664954   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:50.153337   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:50.153470   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:50.164619   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:50.653070   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:50.653264   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:50.664624   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:51.153595   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:51.153706   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:51.164490   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:51.653898   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:51.654069   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:51.664634   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:52.152924   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:52.153037   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:52.164148   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:52.654861   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:52.655098   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:52.666115   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:53.153163   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:53.153296   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:53.164320   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:53.164330   22056 api_server.go:165] Checking apiserver status ...
	I0128 11:19:53.164382   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:19:53.172826   22056 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:19:53.172839   22056 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
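
[editor's note] That verdict follows roughly twenty probe attempts at 500 ms intervals, a classic poll-until-deadline loop. A stripped-down Go version of the same cadence, with the interval read off the timestamps above and the probe command taken from the log (this is a sketch, not minikube's wait code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess reruns the pgrep probe every 500ms until it succeeds
	// or the deadline passes, matching the cadence of the log above.
	func waitForProcess(pattern string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for the condition")
			case <-tick.C:
			}
		}
	}

	func main() {
		fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 10*time.Second))
	}
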
	I0128 11:19:53.172847   22056 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:19:53.172915   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:19:53.196544   22056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:19:53.207252   22056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:19:53.215170   22056 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Jan 28 19:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Jan 28 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan 28 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Jan 28 19:16 /etc/kubernetes/scheduler.conf
	
	I0128 11:19:53.215238   22056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:19:53.222862   22056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:19:53.230410   22056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:19:53.238154   22056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:19:53.245694   22056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:19:53.253806   22056 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:19:53.253823   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:19:53.309462   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:19:53.644061   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:19:53.859294   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:19:53.921585   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
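
[editor's note] On restart, minikube replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of running a full init. A compact Go sketch of that loop; the PATH prefix and config location are copied from the log, and error handling is simplified:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase " +
				p + " --config /var/tmp/minikube/kubeadm.yaml"
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}
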
	I0128 11:19:54.005759   22056 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:19:54.005885   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:54.515493   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:55.015814   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:55.515295   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:56.015991   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:56.516056   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:57.015455   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:57.515240   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:58.015594   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:58.515333   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:59.015422   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:19:59.515317   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:00.015313   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:00.515368   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:01.016075   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:01.515667   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:02.015250   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:02.515889   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:03.015509   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:03.515957   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:04.015975   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:04.515214   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:05.016275   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:05.515406   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:06.015264   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:06.515430   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:07.015567   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:07.515243   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:08.015263   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:08.515539   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:09.015335   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:09.515467   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:10.016393   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:10.515209   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:11.015328   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:11.516058   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:12.015261   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:12.516485   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:13.015391   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:13.515240   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:14.015429   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:14.515332   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:15.016541   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:15.515625   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:16.015539   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:16.515318   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated every ~500ms; 73 intermediate attempts between 11:20:17 and 11:20:53 omitted ...]
	I0128 11:20:53.515384   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
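	[The half-second cadence above is minikube waiting for a kube-apiserver process to appear before it gives up and starts collecting diagnostics. Below is a minimal Go sketch of that wait loop, assuming local execution in place of minikube's SSH runner; the function name and timeout are illustrative, not minikube's actual API.]

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess retries the same pgrep probe seen in the log
// every 500ms until it succeeds or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep -x: exact match, -n: newest process, -f: match against the
		// full command line, so only this profile's kube-apiserver counts.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // the process showed up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}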
	I0128 11:20:54.015681   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:20:54.050922   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.050936   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:20:54.051005   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:20:54.080080   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.080094   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:20:54.080164   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:20:54.105729   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.105746   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:20:54.105839   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:20:54.129505   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.129518   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:20:54.129600   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:20:54.152600   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.152614   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:20:54.152686   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:20:54.175911   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.175926   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:20:54.176002   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:20:54.199536   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.199549   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:20:54.199629   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:20:54.223344   22056 logs.go:279] 0 containers: []
	W0128 11:20:54.223357   22056 logs.go:281] No container was found matching "kube-controller-manager"
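	[The eight "docker ps -a --filter=name=k8s_... --format={{.ID}}" probes above are minikube's log collector checking each expected control-plane container by the k8s_<component> name prefix that dockershim-era Kubernetes gives pod containers. A self-contained sketch of the same sweep, assuming a local Docker daemon:]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component list probed in the log, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy",
		"kubernetes-dashboard", "storage-provisioner", "kube-controller-manager",
	}
	for _, c := range components {
		// -a includes exited containers; the name filter matches the
		// k8s_<component> prefix, so even crashed pods would be found.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}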
	I0128 11:20:54.223363   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:20:54.223370   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:20:54.262840   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:20:54.262855   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:20:54.275907   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:20:54.275922   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:20:54.331320   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
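	["Connection refused" on localhost:8443 is the same symptom as the empty probes above: nothing is listening on the apiserver's secure port, so every kubectl-based collector (describe nodes here) fails identically. A quick reachability check, as a sketch one could run inside the node:]

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A healthy control plane would accept this TCP dial; "connection
	// refused" reproduces the kubectl error recorded in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections")
}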
	I0128 11:20:54.331334   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:20:54.331340   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:20:54.347280   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:20:54.347296   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:20:56.394763   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047455099s)
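	[The container-status command is deliberately portable: "which crictl || echo crictl" substitutes the bare name when crictl is not installed, that bare invocation then fails, and the trailing "|| sudo docker ps -a" runs instead, so the same line works on both crictl- and Docker-only nodes.]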
	I0128 11:20:58.896629   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:20:59.015953   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:20:59.041325   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.041338   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:20:59.041410   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:20:59.066132   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.066144   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:20:59.066211   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:20:59.089496   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.089510   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:20:59.089578   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:20:59.112082   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.112096   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:20:59.112170   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:20:59.135545   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.135558   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:20:59.135625   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:20:59.159848   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.159865   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:20:59.159940   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:20:59.184255   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.184271   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:20:59.184341   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:20:59.209251   22056 logs.go:279] 0 containers: []
	W0128 11:20:59.209268   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:20:59.209277   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:20:59.209286   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:21:01.260996   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051700328s)
	I0128 11:21:01.261150   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:21:01.261159   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:21:01.298399   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:21:01.298416   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:21:01.311229   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:21:01.311242   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:21:01.366765   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:21:01.366777   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:21:01.366784   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	[... 11:21:03 through 11:21:56: the same retry cycle repeats eleven more times at ~5-second intervals (pgrep probe, eight empty docker ps checks, then kubelet/dmesg/describe-nodes/Docker/container-status gathering in varying order); every pass finds 0 containers, and describe nodes fails with the same "connection to the server localhost:8443 was refused" error ...]
	I0128 11:21:58.894323   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:21:59.015255   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:21:59.040753   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.040767   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:21:59.040835   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:21:59.064950   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.064964   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:21:59.065034   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:21:59.088779   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.088794   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:21:59.088865   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:21:59.112658   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.112671   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:21:59.112743   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:21:59.136094   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.136108   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:21:59.136179   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:21:59.160382   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.160396   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:21:59.160518   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:21:59.185945   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.185958   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:21:59.186030   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:21:59.211219   22056 logs.go:279] 0 containers: []
	W0128 11:21:59.211233   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:21:59.211240   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:21:59.211247   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:21:59.293215   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:21:59.293226   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:21:59.293232   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:21:59.309401   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:21:59.309414   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:01.359076   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049652556s)
	I0128 11:22:01.359188   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:01.359194   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:01.398913   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:01.398927   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
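
The timestamps show the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeating roughly every five seconds; pgrep exits non-zero when no matching process exists, so each cycle falls through to the per-container checks and log gathering. A rough sketch of that wait loop, assuming a simple fixed-interval retry until a deadline (the interval and timeout values here are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears
    // or the deadline passes.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only if a process matches the pattern.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil // apiserver process found
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServer(5*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
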
	I0128 11:22:03.912185   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:04.016754   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:04.041799   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.041812   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:04.041884   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:04.066636   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.066650   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:04.066724   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:04.090069   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.090083   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:04.090167   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:04.113286   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.113300   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:04.113368   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:04.136896   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.136910   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:04.136982   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:04.160302   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.160316   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:04.160385   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:04.184734   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.184752   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:04.184825   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:04.208582   22056 logs.go:279] 0 containers: []
	W0128 11:22:04.208596   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:04.208604   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:04.208612   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:04.249062   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:04.249076   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:04.261846   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:04.261861   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:04.316608   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:04.316620   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:04.316627   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:04.335048   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:04.335064   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:06.386742   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051668685s)
	I0128 11:22:08.887868   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:09.016244   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:09.040597   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.040610   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:09.040679   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:09.064524   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.064537   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:09.064603   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:09.088510   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.088541   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:09.088614   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:09.112153   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.112166   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:09.112236   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:09.135717   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.135733   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:09.135810   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:09.160402   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.160416   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:09.160486   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:09.185456   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.185468   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:09.185539   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:09.209612   22056 logs.go:279] 0 containers: []
	W0128 11:22:09.209625   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:09.209632   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:09.209640   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:11.260441   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050789998s)
	I0128 11:22:11.260561   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:11.260569   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:11.301615   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:11.301629   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:11.314837   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:11.314859   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:11.373374   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:11.373387   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:11.373395   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
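
Every `kubectl describe nodes` attempt fails the same way: the connection to localhost:8443 is refused, which is consistent with the earlier probes finding no kube-apiserver container or process at all. A quick way to confirm from code that nothing is listening on the apiserver port, independent of kubectl (a plain TCP dial):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused TCP dial on 8443 matches the kubectl error above:
        // no apiserver is listening, so describe nodes cannot succeed.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
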
	I0128 11:22:13.890302   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:14.015666   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:14.041114   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.041128   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:14.041213   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:14.064293   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.064306   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:14.064380   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:14.088123   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.088147   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:14.088216   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:14.112666   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.112682   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:14.112760   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:14.137198   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.137212   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:14.137283   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:14.160854   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.160868   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:14.160936   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:14.184888   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.184903   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:14.184975   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:14.209319   22056 logs.go:279] 0 containers: []
	W0128 11:22:14.209333   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:14.209340   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:14.209349   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:14.296781   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:14.296795   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:14.296801   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:14.314214   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:14.314230   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:16.360089   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045849069s)
	I0128 11:22:16.360196   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:16.360203   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:16.398839   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:16.398856   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:18.911680   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:19.015302   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:19.040655   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.040671   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:19.040762   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:19.063645   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.063658   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:19.063733   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:19.086228   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.086240   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:19.086306   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:19.110086   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.110099   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:19.110169   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:19.134110   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.134124   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:19.134196   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:19.159041   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.159053   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:19.159123   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:19.183094   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.183109   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:19.183182   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:19.206206   22056 logs.go:279] 0 containers: []
	W0128 11:22:19.206218   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:19.206225   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:19.206233   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:19.247982   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:19.247996   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:19.260671   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:19.260684   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:19.315878   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:19.315896   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:19.315912   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:19.344379   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:19.344400   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:21.401065   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056651476s)
	I0128 11:22:23.901574   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:24.015511   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:24.039579   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.039592   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:24.039667   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:24.062941   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.062954   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:24.063025   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:24.087166   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.087180   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:24.087248   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:24.110389   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.110403   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:24.110470   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:24.134092   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.134106   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:24.134177   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:24.158862   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.158875   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:24.158944   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:24.182543   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.182557   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:24.182644   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:24.207332   22056 logs.go:279] 0 containers: []
	W0128 11:22:24.207346   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:24.207353   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:24.207364   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:24.223470   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:24.223485   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:26.275977   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052481613s)
	I0128 11:22:26.276086   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:26.276093   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:26.314808   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:26.314821   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:26.327620   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:26.327636   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:26.383651   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
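
Each per-component probe filters `docker ps -a` by container name prefix (k8s_kube-apiserver, k8s_etcd, and so on). This relies on the dockershim convention of naming containers `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, so an empty result means the component's container was never created. A sketch of the same check for a few components (the component list and helper name are ours):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name matches the dockershim
    // k8s_<component> prefix, exactly as the probes in the log do.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "probe failed:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
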
	I0128 11:22:28.883830   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:29.016780   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:29.041977   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.041991   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:29.042063   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:29.065716   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.065730   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:29.065802   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:29.089951   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.089966   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:29.090042   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:29.114249   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.114263   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:29.114332   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:29.138839   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.138856   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:29.138931   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:29.162891   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.162904   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:29.162974   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:29.186616   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.186630   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:29.186707   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:29.212183   22056 logs.go:279] 0 containers: []
	W0128 11:22:29.212198   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:29.212206   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:29.212214   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:29.230449   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:29.230465   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:31.281105   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050629959s)
	I0128 11:22:31.281219   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:31.281226   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:31.322039   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:31.322061   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:31.336804   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:31.336821   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:31.397043   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:33.897217   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:34.017008   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:34.042841   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.042855   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:34.042921   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:34.066830   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.066843   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:34.066912   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:34.091405   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.091419   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:34.091492   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:34.114474   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.114488   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:34.114575   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:34.137846   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.137859   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:34.137930   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:34.161273   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.161287   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:34.161358   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:34.186777   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.186790   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:34.186858   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:34.211597   22056 logs.go:279] 0 containers: []
	W0128 11:22:34.211610   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:34.211618   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:34.211626   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:34.251889   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:34.251905   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:34.264626   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:34.264640   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:34.320239   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:34.320252   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:34.320260   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:34.336624   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:34.336637   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:36.386917   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050270786s)
	I0128 11:22:38.889069   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:39.015706   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:39.039382   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.039395   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:39.039465   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:39.062819   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.062831   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:39.062900   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:39.086271   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.086285   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:39.086356   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:39.110562   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.110577   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:39.110647   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:39.135349   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.135362   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:39.135433   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:39.159559   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.159573   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:39.159645   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:39.183018   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.183031   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:39.183101   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:39.207712   22056 logs.go:279] 0 containers: []
	W0128 11:22:39.207728   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:39.207735   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:39.207768   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:39.264200   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:39.264226   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:39.264232   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:39.280592   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:39.280606   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:41.331294   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050662968s)
	I0128 11:22:41.331409   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:41.331416   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:41.369709   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:41.369725   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
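
With no component containers to inspect, the only diagnostics left are host-level: the kubelet and docker units via journalctl and the kernel ring buffer via dmesg, each capped at the last 400 lines as the log shows. A condensed sketch gathering those same three sources locally (running them over SSH, as ssh_runner does, is omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same three host-level sources the log gathers, capped at 400 lines.
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "docker":  "sudo journalctl -u docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: failed: %v\n", name, err)
                continue
            }
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }
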
	I0128 11:22:43.882313   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:44.015678   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:44.041647   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.041661   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:44.041736   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:44.065408   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.065422   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:44.065489   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:44.089379   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.089394   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:44.089466   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:44.113608   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.113622   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:44.113711   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:44.136792   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.136805   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:44.136879   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:44.160293   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.160306   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:44.160374   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:44.184876   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.184890   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:44.184962   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:44.211490   22056 logs.go:279] 0 containers: []
	W0128 11:22:44.211504   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:44.211511   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:44.211518   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:44.229118   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:44.229156   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:46.281768   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052601615s)
	I0128 11:22:46.281874   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:46.281881   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:46.320012   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:46.320029   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:46.332482   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:46.332498   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:46.388568   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:48.888662   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:49.016373   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:49.041689   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.041704   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:49.041779   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:49.064967   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.064982   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:49.065049   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:49.087764   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.087777   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:49.087846   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:49.112150   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.112163   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:49.112224   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:49.135559   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.135573   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:49.135641   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:49.160002   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.160016   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:49.160083   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:49.184308   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.184322   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:49.184399   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:49.209203   22056 logs.go:279] 0 containers: []
	W0128 11:22:49.209218   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:49.209224   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:49.209231   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:49.226148   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:49.226181   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:51.275406   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049215731s)
	I0128 11:22:51.275518   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:51.275525   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:51.314743   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:51.314758   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:51.327882   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:51.327896   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:51.384692   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:53.885853   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:54.015027   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:54.040483   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.040497   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:54.040566   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:54.063451   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.063464   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:54.063529   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:54.086554   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.086568   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:54.086637   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:54.111176   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.111190   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:54.111265   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:54.135379   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.135391   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:54.135458   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:54.158318   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.158332   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:54.158404   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:54.182039   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.182055   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:54.182133   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:54.206152   22056 logs.go:279] 0 containers: []
	W0128 11:22:54.206164   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:54.206171   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:54.206178   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:56.257292   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05110473s)
	I0128 11:22:56.257402   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:56.257410   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:56.296910   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:56.296925   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:56.309707   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:56.309722   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:56.365529   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:56.365540   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:56.365547   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:58.882587   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:22:59.015623   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:59.039573   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.039587   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:59.039674   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:59.062598   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.062613   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:22:59.062695   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:59.085904   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.085916   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:22:59.085983   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:59.110933   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.110946   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:59.111016   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:59.134765   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.134778   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:59.134851   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:59.158179   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.158193   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:59.158265   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:59.182088   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.182103   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:59.182178   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:59.207332   22056 logs.go:279] 0 containers: []
	W0128 11:22:59.207346   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:59.207353   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:59.207361   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:59.249984   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:59.250003   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:59.263186   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:59.263201   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:59.331970   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:59.331982   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:59.331989   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:22:59.347615   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:22:59.347627   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:01.395875   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048239208s)
	I0128 11:23:03.896228   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:04.017140   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:04.043046   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.043059   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:04.043128   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:04.067288   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.067300   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:04.067373   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:04.091414   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.091428   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:04.091503   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:04.115599   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.115612   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:04.115685   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:04.139640   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.139657   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:04.139744   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:04.163157   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.163170   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:04.163242   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:04.186960   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.186974   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:04.187046   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:04.210608   22056 logs.go:279] 0 containers: []
	W0128 11:23:04.210621   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:04.210630   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:04.210641   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:04.266710   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:04.266723   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:04.266730   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:04.283471   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:04.283484   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:06.334668   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051174965s)
	I0128 11:23:06.334781   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:06.334789   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:06.372694   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:06.372707   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:08.885907   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:09.015768   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:09.039361   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.039374   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:09.039442   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:09.063290   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.063303   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:09.063376   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:09.086496   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.086510   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:09.086577   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:09.110732   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.110745   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:09.110815   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:09.134250   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.134265   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:09.134336   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:09.158089   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.158102   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:09.158168   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:09.182746   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.182759   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:09.182827   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:09.206334   22056 logs.go:279] 0 containers: []
	W0128 11:23:09.206349   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:09.206357   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:09.206364   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:09.247166   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:09.247181   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:09.259702   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:09.259715   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:09.316026   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:09.316036   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:09.316042   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:09.332287   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:09.332302   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:11.383753   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051441792s)
	I0128 11:23:13.885879   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:14.015116   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:14.048206   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.048221   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:14.048303   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:14.071324   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.071352   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:14.071423   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:14.095810   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.095825   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:14.095895   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:14.120203   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.120218   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:14.120303   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:14.143677   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.143693   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:14.143764   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:14.167392   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.167404   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:14.167477   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:14.193445   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.193458   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:14.193526   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:14.220607   22056 logs.go:279] 0 containers: []
	W0128 11:23:14.220623   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:14.220631   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:14.220638   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:14.234809   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:14.234823   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:14.315101   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:14.315111   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:14.315118   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:14.331406   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:14.331419   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:16.383160   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051732266s)
	I0128 11:23:16.383273   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:16.383281   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:18.922006   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:19.017238   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:19.042224   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.042238   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:19.042309   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:19.065529   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.065542   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:19.065610   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:19.089579   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.089593   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:19.089687   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:19.114062   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.114075   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:19.114148   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:19.137250   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.137264   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:19.137332   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:19.161529   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.161542   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:19.161638   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:19.186567   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.186580   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:19.186648   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:19.209622   22056 logs.go:279] 0 containers: []
	W0128 11:23:19.209639   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:19.209654   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:19.209664   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:19.222495   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:19.222520   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:19.280274   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:19.280285   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:19.280292   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:19.296828   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:19.296843   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:21.343786   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046933153s)
	I0128 11:23:21.343897   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:21.343903   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:23.884866   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:24.015747   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:24.040447   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.040462   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:24.040583   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:24.065528   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.065542   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:24.065618   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:24.090896   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.090912   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:24.090984   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:24.113925   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.113940   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:24.114012   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:24.137923   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.137936   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:24.138004   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:24.162916   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.162931   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:24.163002   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:24.186143   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.186155   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:24.186236   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:24.209457   22056 logs.go:279] 0 containers: []
	W0128 11:23:24.209470   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:24.209478   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:24.209485   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:24.249724   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:24.249738   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:24.262494   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:24.262508   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:24.320578   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:24.320595   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:24.320604   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:24.337754   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:24.337768   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:26.388566   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050786192s)
	I0128 11:23:28.888979   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:29.015914   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:29.042620   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.042635   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:29.042707   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:29.064954   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.064968   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:29.065038   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:29.088740   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.088754   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:29.088826   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:29.112145   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.112162   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:29.112232   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:29.134924   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.134937   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:29.135007   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:29.159255   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.159267   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:29.159352   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:29.184847   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.184862   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:29.184944   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:29.212277   22056 logs.go:279] 0 containers: []
	W0128 11:23:29.212291   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:29.212300   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:29.212309   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:29.225978   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:29.225996   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:29.305620   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:29.305640   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:29.305650   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:29.322345   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:29.322359   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:31.372237   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049869374s)
	I0128 11:23:31.372350   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:31.372359   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:33.912218   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:34.015505   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:34.041072   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.041086   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:34.041154   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:34.064246   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.064263   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:34.064337   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:34.089018   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.089031   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:34.089101   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:34.113170   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.113185   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:34.113254   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:34.136794   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.136810   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:34.136898   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:34.160316   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.160331   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:34.160401   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:34.183726   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.183738   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:34.183810   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:34.208644   22056 logs.go:279] 0 containers: []
	W0128 11:23:34.208658   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:34.208666   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:34.208674   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:34.248183   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:34.248196   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:34.261046   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:34.261059   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:34.317405   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:34.317416   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:34.317443   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:34.333692   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:34.333706   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:36.380977   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047260665s)
	I0128 11:23:38.881397   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:39.015098   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:39.040207   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.040221   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:39.040292   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:39.063028   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.063040   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:39.063107   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:39.085862   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.085876   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:39.085943   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:39.112709   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.112724   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:39.112792   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:39.137242   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.137258   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:39.137328   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:39.161868   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.161882   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:39.161951   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:39.186145   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.186159   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:39.186229   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:39.209923   22056 logs.go:279] 0 containers: []
	W0128 11:23:39.209936   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:39.209942   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:39.209949   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:39.227570   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:39.227585   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:41.276961   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049366544s)
	I0128 11:23:41.277069   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:41.277076   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:41.316402   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:41.316419   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:41.329368   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:41.329382   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:41.385691   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:43.886737   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:44.015128   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:44.038809   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.038827   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:44.038905   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:44.062622   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.062644   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:44.062716   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:44.087229   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.087242   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:44.087310   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:44.109952   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.109966   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:44.110037   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:44.135534   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.135547   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:44.135613   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:44.158648   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.158661   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:44.158730   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:44.182973   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.183007   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:44.183114   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:44.209650   22056 logs.go:279] 0 containers: []
	W0128 11:23:44.209669   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:44.209678   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:44.209689   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:44.251998   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:44.252017   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:44.265456   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:44.265471   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:44.321875   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:44.321886   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:44.321893   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:44.338092   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:44.338104   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:46.387970   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049856096s)
	I0128 11:23:48.889027   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:49.015487   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:23:49.041155   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.041170   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:23:49.041239   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:23:49.064443   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.064456   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:23:49.064524   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:23:49.088343   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.088357   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:23:49.088428   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:23:49.112462   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.112475   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:23:49.112543   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:23:49.136359   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.136373   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:23:49.136461   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:23:49.160737   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.160750   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:23:49.160823   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:23:49.184856   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.184868   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:23:49.184939   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:23:49.207772   22056 logs.go:279] 0 containers: []
	W0128 11:23:49.207786   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:23:49.207792   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:23:49.207800   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:23:49.250244   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:23:49.250277   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:23:49.264324   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:23:49.264339   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:23:49.322946   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:23:49.322971   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:23:49.322978   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:23:49.339491   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:23:49.339504   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:23:51.391227   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051713742s)
	I0128 11:23:53.891628   22056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:23:54.015089   22056 kubeadm.go:637] restartCluster took 4m10.951458007s
	W0128 11:23:54.015187   22056 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0128 11:23:54.015207   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:23:54.429852   22056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:23:54.439771   22056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:23:54.448255   22056 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:23:54.448301   22056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:23:54.456526   22056 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
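	With the restart abandoned, minikube falls back to a clean bootstrap: 'kubeadm reset' wipes the node's Kubernetes state, and the 'ls' check above then fails with status 2 because none of the kubeconfig files survive, confirming there is no stale config to clean before re-running 'kubeadm init'. The probe amounts to (reconstructed from the log):

	    # Exit status 2 (all files missing) means a fresh init can proceed directly.
	    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
	      || echo "no stale kubeconfigs; skipping stale config cleanup"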
	I0128 11:23:54.456553   22056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:23:54.504553   22056 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:23:54.505022   22056 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:23:54.811802   22056 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:23:54.811889   22056 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:23:54.811974   22056 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0128 11:23:55.046703   22056 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:23:55.047687   22056 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:23:55.054283   22056 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:23:55.121591   22056 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:23:55.143242   22056 out.go:204]   - Generating certificates and keys ...
	I0128 11:23:55.143347   22056 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:23:55.143400   22056 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:23:55.143479   22056 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:23:55.143532   22056 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:23:55.143616   22056 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:23:55.143671   22056 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:23:55.143754   22056 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:23:55.143833   22056 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:23:55.143922   22056 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:23:55.143988   22056 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:23:55.144043   22056 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:23:55.144125   22056 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:23:55.368911   22056 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:23:55.579243   22056 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:23:55.680953   22056 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:23:55.801235   22056 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:23:55.802217   22056 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:23:55.844684   22056 out.go:204]   - Booting up control plane ...
	I0128 11:23:55.844899   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:23:55.845068   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:23:55.845212   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:23:55.845373   22056 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:23:55.845637   22056 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:24:35.811852   22056 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:24:35.812431   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:24:35.812683   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:24:40.814135   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:24:40.814354   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:24:50.816146   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:24:50.816356   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:25:10.817751   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:25:10.817990   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:25:50.818918   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:25:50.819153   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:25:50.819172   22056 kubeadm.go:322] 
	I0128 11:25:50.819221   22056 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:25:50.819271   22056 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:25:50.819281   22056 kubeadm.go:322] 
	I0128 11:25:50.819316   22056 kubeadm.go:322] This error is likely caused by:
	I0128 11:25:50.819354   22056 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:25:50.819517   22056 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:25:50.819544   22056 kubeadm.go:322] 
	I0128 11:25:50.819692   22056 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:25:50.819720   22056 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:25:50.819744   22056 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:25:50.819751   22056 kubeadm.go:322] 
	I0128 11:25:50.819838   22056 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:25:50.819911   22056 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0128 11:25:50.819988   22056 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0128 11:25:50.820037   22056 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:25:50.820094   22056 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:25:50.820126   22056 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:25:50.822107   22056 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:25:50.822170   22056 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:25:50.822268   22056 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:25:50.822359   22056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:25:50.822444   22056 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:25:50.822532   22056 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
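	kubeadm's failure message already names the useful next steps. Collected in one place, together with the kubelet healthz probe it was retrying, the triage commands are (all taken from the output above; run inside the node):

	    systemctl status kubelet                    # is the unit active at all?
	    journalctl -xeu kubelet                     # and if not, why it exited
	    curl -sSL http://localhost:10248/healthz    # the probe kubeadm kept retrying
	    docker ps -a | grep kube | grep -v pause    # did any control-plane container start?
	    # docker logs CONTAINERID                   # then inspect the failing container's logs
	    sudo systemctl enable kubelet.service       # addresses the Service-Kubelet preflight warning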
	W0128 11:25:50.822645   22056 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 11:25:50.822675   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:25:51.237903   22056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:25:51.247955   22056 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:25:51.248009   22056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:25:51.255700   22056 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:25:51.255718   22056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:25:51.301701   22056 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:25:51.301743   22056 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:25:51.608280   22056 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:25:51.608364   22056 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:25:51.608459   22056 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:25:51.849326   22056 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:25:51.851398   22056 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:25:51.859268   22056 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:25:51.943006   22056 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:25:51.964616   22056 out.go:204]   - Generating certificates and keys ...
	I0128 11:25:51.964712   22056 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:25:51.964792   22056 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:25:51.964895   22056 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:25:51.964969   22056 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:25:51.965050   22056 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:25:51.965107   22056 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:25:51.965218   22056 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:25:51.965296   22056 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:25:51.965376   22056 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:25:51.965463   22056 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:25:51.965499   22056 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:25:51.965571   22056 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:25:52.126987   22056 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:25:52.205569   22056 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:25:52.651210   22056 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:25:52.722053   22056 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:25:52.722659   22056 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:25:52.744153   22056 out.go:204]   - Booting up control plane ...
	I0128 11:25:52.744386   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:25:52.744500   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:25:52.744617   22056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:25:52.744770   22056 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:25:52.745074   22056 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:26:32.732745   22056 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:26:32.733442   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:26:32.733682   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:26:37.733840   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:26:37.734017   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:26:47.735385   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:26:47.735526   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:27:07.735971   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:27:07.736141   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:27:47.737949   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:27:47.738172   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:27:47.738188   22056 kubeadm.go:322] 
	I0128 11:27:47.738238   22056 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:27:47.738279   22056 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:27:47.738284   22056 kubeadm.go:322] 
	I0128 11:27:47.738364   22056 kubeadm.go:322] This error is likely caused by:
	I0128 11:27:47.738417   22056 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:27:47.738551   22056 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:27:47.738568   22056 kubeadm.go:322] 
	I0128 11:27:47.738688   22056 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:27:47.738728   22056 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:27:47.738764   22056 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:27:47.738769   22056 kubeadm.go:322] 
	I0128 11:27:47.738891   22056 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:27:47.738995   22056 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:27:47.739112   22056 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:27:47.739180   22056 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:27:47.739265   22056 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:27:47.739307   22056 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:27:47.741658   22056 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:27:47.741725   22056 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:27:47.741828   22056 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:27:47.741917   22056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:27:47.741984   22056 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:27:47.742053   22056 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:27:47.742072   22056 kubeadm.go:403] StartCluster complete in 8m4.712143834s
	I0128 11:27:47.742162   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:27:47.765882   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.765895   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:27:47.765967   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:27:47.789785   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.789799   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:27:47.789871   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:27:47.814163   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.814176   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:27:47.814242   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:27:47.875047   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.875060   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:27:47.875129   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:27:47.898104   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.898117   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:27:47.898202   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:27:47.922817   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.922832   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:27:47.922901   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:27:47.949939   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.949954   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:27:47.950024   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:27:47.975256   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.975269   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:27:47.975277   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:27:47.975284   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:27:47.994111   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:27:47.994126   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:27:48.051842   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:27:48.051853   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:27:48.051859   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:27:48.068182   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:27:48.068196   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:27:50.120577   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052372173s)
	I0128 11:27:50.120688   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:27:50.120697   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0128 11:27:50.159662   22056 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:27:50.159684   22056 out.go:239] * 
	W0128 11:27:50.159792   22056 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:27:50.159833   22056 out.go:239] * 
	W0128 11:27:50.160477   22056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 11:27:50.223100   22056 out.go:177] 
	W0128 11:27:50.281364   22056 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:27:50.281563   22056 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:27:50.281635   22056 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:27:50.355023   22056 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-867000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:19:38.982033396Z",
	            "FinishedAt": "2023-01-28T19:19:35.984970564Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56bc2e5c762ee218e9cc648a942743397f45d38fe7e80bb7ebfa5abcf2ee1586",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55320"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55321"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55322"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55323"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55319"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/56bc2e5c762e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "fc33025b57ea548e3024d3d6addb6d5cbf64cfd4291900853273d019fcc07246",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
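The inspect dump above shows the shape of a kic node's networking: every service port the container exposes (22/tcp for SSH, 2376/tcp for the docker daemon, 8443/tcp for the Kubernetes API server) is published only on 127.0.0.1 under an ephemeral host port. A minimal Go sketch of reading one of those mappings back, assuming a local docker CLI and the old-k8s-version-867000 container from the dump; it mirrors the `docker container inspect -f` template calls that recur in the minikube log below:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the 127.0.0.1 host port docker published for a container
// port, using the same Go template the log's cli_runner invocations use:
// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Any running container with a published port works; against the dump
	// above this prints 55320 for 22/tcp.
	port, err := hostPort("old-k8s-version-867000", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}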
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (422.657488ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25
E0128 11:27:54.483070    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25: (3.653766339s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-360000 sudo                            | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:15 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-360000 sudo                            | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST |                     |
	|         | systemctl status crio --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-360000 sudo                            | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:15 PST |
	|         | systemctl cat crio --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-360000 sudo find                       | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:15 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-360000 sudo crio                       | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:15 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p kubenet-360000                                 | kubenet-360000         | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:15 PST |
	| start   | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:15 PST | 28 Jan 23 11:16 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-625000        | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625000             | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:25 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-867000   | old-k8s-version-867000 | jenkins | v1.29.0 | 28 Jan 23 11:18 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-867000                         | old-k8s-version-867000 | jenkins | v1.29.0 | 28 Jan 23 11:19 PST | 28 Jan 23 11:19 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867000        | old-k8s-version-867000 | jenkins | v1.29.0 | 28 Jan 23 11:19 PST | 28 Jan 23 11:19 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-867000                         | old-k8s-version-867000 | jenkins | v1.29.0 | 28 Jan 23 11:19 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-625000 sudo                         | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	| delete  | -p no-preload-625000                              | no-preload-625000      | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	| start   | -p embed-certs-724000                             | embed-certs-724000     | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-724000       | embed-certs-724000     | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-724000                             | embed-certs-724000     | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-724000            | embed-certs-724000     | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-724000                             | embed-certs-724000     | jenkins | v1.29.0 | 28 Jan 23 11:27 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:27:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:27:18.612714   22829 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:27:18.612988   22829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:27:18.612993   22829 out.go:309] Setting ErrFile to fd 2...
	I0128 11:27:18.612997   22829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:27:18.613120   22829 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:27:18.613597   22829 out.go:303] Setting JSON to false
	I0128 11:27:18.632430   22829 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5213,"bootTime":1674928825,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:27:18.632518   22829 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:27:18.654448   22829 out.go:177] * [embed-certs-724000] minikube v1.29.0 on Darwin 13.2
	I0128 11:27:18.696673   22829 notify.go:220] Checking for updates...
	I0128 11:27:18.717748   22829 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:27:18.759584   22829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:27:18.780991   22829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:27:18.802084   22829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:27:18.823924   22829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:27:18.846132   22829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:27:18.868736   22829 config.go:180] Loaded profile config "embed-certs-724000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:18.869413   22829 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:27:18.931519   22829 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:27:18.931685   22829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:27:19.076433   22829 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:27:18.983045117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:27:19.118766   22829 out.go:177] * Using the docker driver based on existing profile
	I0128 11:27:19.139840   22829 start.go:296] selected driver: docker
	I0128 11:27:19.139874   22829 start.go:857] validating driver "docker" against &{Name:embed-certs-724000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:19.140018   22829 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:27:19.144067   22829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:27:19.289141   22829 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:27:19.194834368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:27:19.289284   22829 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:27:19.289304   22829 cni.go:84] Creating CNI manager for ""
	I0128 11:27:19.289316   22829 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
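The CNI decision logged above encodes a version rule: from Kubernetes v1.24 the dockershim (and its built-in networking) is gone, so a "docker" driver plus "docker" runtime cluster has to ship an explicit CNI, and minikube recommends bridge. A sketch of that cutoff as a standalone check; the function name and exact policy are an illustration inferred from this log line, not minikube's actual cni.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// needsExplicitCNI reports whether a docker-runtime cluster must bring its
// own CNI plugin: true for Kubernetes v1.24 and newer, where dockershim's
// implicit networking no longer exists.
func needsExplicitCNI(kubernetesVersion string) bool {
	parts := strings.SplitN(strings.TrimPrefix(kubernetesVersion, "v"), ".", 3)
	if len(parts) < 2 {
		return false // unparseable version: assume the legacy behavior
	}
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	return major > 1 || (major == 1 && minor >= 24)
}

func main() {
	// The two versions exercised in this report: the legacy old-k8s-version
	// profile and the current embed-certs profile.
	for _, v := range []string{"v1.16.0", "v1.26.1"} {
		fmt.Printf("%s -> explicit CNI needed: %v\n", v, needsExplicitCNI(v))
	}
}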
	I0128 11:27:19.289342   22829 start_flags.go:319] config:
	{Name:embed-certs-724000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:19.333145   22829 out.go:177] * Starting control plane node embed-certs-724000 in cluster embed-certs-724000
	I0128 11:27:19.355112   22829 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:27:19.375759   22829 out.go:177] * Pulling base image ...
	I0128 11:27:19.418065   22829 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:19.418145   22829 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:27:19.418154   22829 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:27:19.418178   22829 cache.go:57] Caching tarball of preloaded images
	I0128 11:27:19.418397   22829 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:27:19.418419   22829 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:27:19.419456   22829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/config.json ...
	I0128 11:27:19.480624   22829 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:27:19.480642   22829 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:27:19.480674   22829 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:27:19.480719   22829 start.go:364] acquiring machines lock for embed-certs-724000: {Name:mk53afa6fe17ac4d5a98e97a36699dc42748b024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:27:19.480810   22829 start.go:368] acquired machines lock for "embed-certs-724000" in 70.66µs
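The machines lock acquired above (note the Delay:500ms Timeout:10m0s parameters) is what keeps concurrent minikube invocations from provisioning the same machine at once. A simplified file-based sketch of that Delay/Timeout contract; minikube itself uses a named OS mutex rather than a lock file, so treat this purely as an illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, sleeping delay
// between attempts; releasing the lock is just removing the file.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			f.Close()
			return nil // lock held
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := "/tmp/minikube-machines.lock" // illustrative path, not minikube's
	if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(lock)
	fmt.Println("machines lock acquired")
}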
	I0128 11:27:19.480835   22829 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:27:19.480845   22829 fix.go:55] fixHost starting: 
	I0128 11:27:19.481104   22829 cli_runner.go:164] Run: docker container inspect embed-certs-724000 --format={{.State.Status}}
	I0128 11:27:19.541105   22829 fix.go:103] recreateIfNeeded on embed-certs-724000: state=Stopped err=<nil>
	W0128 11:27:19.541156   22829 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:27:19.563106   22829 out.go:177] * Restarting existing docker container for "embed-certs-724000" ...
	I0128 11:27:19.585028   22829 cli_runner.go:164] Run: docker start embed-certs-724000
	I0128 11:27:19.936963   22829 cli_runner.go:164] Run: docker container inspect embed-certs-724000 --format={{.State.Status}}
	I0128 11:27:19.999526   22829 kic.go:426] container "embed-certs-724000" state is running.
	I0128 11:27:20.000129   22829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-724000
	I0128 11:27:20.063728   22829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/config.json ...
	I0128 11:27:20.064197   22829 machine.go:88] provisioning docker machine ...
	I0128 11:27:20.064220   22829 ubuntu.go:169] provisioning hostname "embed-certs-724000"
	I0128 11:27:20.064293   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:20.135306   22829 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:20.135519   22829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55496 <nil> <nil>}
	I0128 11:27:20.135533   22829 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-724000 && echo "embed-certs-724000" | sudo tee /etc/hostname
	I0128 11:27:20.284525   22829 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-724000
	
	I0128 11:27:20.284636   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:20.347526   22829 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:20.347693   22829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55496 <nil> <nil>}
	I0128 11:27:20.347707   22829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-724000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-724000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-724000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:27:20.483348   22829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
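The SSH snippet above is an idempotent /etc/hosts patch: grep -xq first checks for an exact line already naming the machine, sed rewrites an existing 127.0.1.1 alias in place, and tee -a appends one only when neither exists. The same update expressed as a small Go program, pointed at a scratch file so it runs unprivileged; the paths and the 0644 mode are assumptions of the sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell guard in the log: leave the file alone
// if the name is already present, rewrite the 127.0.1.1 alias if there is
// one, otherwise append a fresh entry.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	if len(lines) == 1 && lines[0] == "" {
		lines = nil // empty file
	}
	// First pass, like grep -xq: is the name already mapped on some line?
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return nil // nothing to do
		}
	}
	// Second pass, like the sed branch: rewrite an existing 127.0.1.1 alias.
	entry := "127.0.1.1 " + name
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry
			replaced = true
			break
		}
	}
	if !replaced { // the tee -a branch: append a fresh entry
		lines = append(lines, entry)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	// /tmp/hosts stands in for /etc/hosts so the sketch runs unprivileged.
	if err := ensureHostname("/tmp/hosts", "embed-certs-724000"); err != nil {
		fmt.Println(err)
	}
}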
	I0128 11:27:20.483376   22829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:27:20.483394   22829 ubuntu.go:177] setting up certificates
	I0128 11:27:20.483403   22829 provision.go:83] configureAuth start
	I0128 11:27:20.483485   22829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-724000
	I0128 11:27:20.543710   22829 provision.go:138] copyHostCerts
	I0128 11:27:20.543808   22829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:27:20.543817   22829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:27:20.543912   22829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:27:20.544119   22829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:27:20.544126   22829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:27:20.544185   22829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:27:20.544330   22829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:27:20.544336   22829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:27:20.544392   22829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:27:20.544516   22829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.embed-certs-724000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-724000]
	I0128 11:27:20.921100   22829 provision.go:172] copyRemoteCerts
	I0128 11:27:20.921166   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:27:20.921219   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:20.984193   22829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55496 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/embed-certs-724000/id_rsa Username:docker}
	I0128 11:27:21.079987   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:27:21.097656   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0128 11:27:21.115175   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:27:21.132678   22829 provision.go:86] duration metric: configureAuth took 649.259161ms
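configureAuth above regenerates the docker daemon's TLS material: the host certs are synced into the profile, then a server certificate is signed whose SANs cover the node IP, loopback, and the machine names, so `docker --tlsverify` against the published 2376 port validates from the host. A compact sketch of generating such a SAN'd server certificate; minikube signs with its own CA (the ca.pem/ca-key.pem paths above), while this sketch self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertPEM creates a self-signed server certificate whose SANs match
// the list the log reports: node IP, loopback, and the machine names.
func serverCertPEM(ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-724000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IPAddresses:  ips,
		DNSNames:     dnsNames,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	cert, err := serverCertPEM(
		[]net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		[]string{"localhost", "minikube", "embed-certs-724000"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(string(cert))
}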
	I0128 11:27:21.132693   22829 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:27:21.132859   22829 config.go:180] Loaded profile config "embed-certs-724000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:21.132941   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:21.192537   22829 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:21.192684   22829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55496 <nil> <nil>}
	I0128 11:27:21.192692   22829 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:27:21.328369   22829 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:27:21.328386   22829 ubuntu.go:71] root file system type: overlay
	I0128 11:27:21.328587   22829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:27:21.328677   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:21.387345   22829 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:21.387504   22829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55496 <nil> <nil>}
	I0128 11:27:21.387592   22829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:27:21.530646   22829 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
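The pair of ExecStart= lines in the unit above is the standard systemd override idiom the embedded comments describe: the empty assignment clears the ExecStart inherited from the base dockerd unit, and the following one installs the replacement command, since two populated ExecStart= settings would make systemd refuse to start a non-oneshot service. A minimal Go sketch of rendering such an override; the flags and output path are illustrative, and minikube renders the full unit shown above from its own template:

package main

import (
	"fmt"
	"os"
	"strings"
)

// dropIn renders a [Service] override that replaces ExecStart. The empty
// "ExecStart=" line is required so systemd drops the inherited command
// instead of treating the two settings as an invalid sequence.
func dropIn(daemon string, args ...string) string {
	cmd := daemon
	if len(args) > 0 {
		cmd += " " + strings.Join(args, " ")
	}
	return fmt.Sprintf("[Service]\nExecStart=\nExecStart=%s\n", cmd)
}

func main() {
	unit := dropIn("/usr/bin/dockerd",
		"-H", "unix:///var/run/docker.sock",
		"--default-ulimit=nofile=1048576:1048576")
	// /tmp stands in for /lib/systemd/system/docker.service.new above.
	if err := os.WriteFile("/tmp/docker.service.new", []byte(unit), 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(unit)
}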
	
	I0128 11:27:21.530755   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:21.589861   22829 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:21.590027   22829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55496 <nil> <nil>}
	I0128 11:27:21.590040   22829 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:27:21.726012   22829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:27:21.726027   22829 machine.go:91] provisioned docker machine in 1.661823884s
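Provisioning finishes with the `diff -u ... || { mv ...; systemctl ... restart docker; }` one-liner a few lines up, which makes the unit update idempotent: diff succeeds when the freshly rendered file matches the installed one, so docker is only reloaded and restarted when the configuration actually changed. The same compare-before-restart pattern in Go, with assumed /tmp paths and a systemctl call that needs systemd and root to do anything:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged moves newPath over path and restarts the service only
// when the contents differ, mirroring the SSH one-liner in the log.
func installIfChanged(path, newPath, service string) error {
	old, _ := os.ReadFile(path) // a missing installed file reads as empty, i.e. "changed"
	cand, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(old, cand) {
		return os.Remove(newPath) // unchanged: discard the candidate, no restart
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/tmp/docker.service", "/tmp/docker.service.new", "docker"); err != nil {
		fmt.Println(err)
	}
}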
	I0128 11:27:21.726035   22829 start.go:300] post-start starting for "embed-certs-724000" (driver="docker")
	I0128 11:27:21.726040   22829 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:27:21.726117   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:27:21.726172   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:21.787263   22829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55496 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/embed-certs-724000/id_rsa Username:docker}
	I0128 11:27:21.884163   22829 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:27:21.887923   22829 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:27:21.887940   22829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:27:21.887947   22829 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:27:21.887953   22829 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:27:21.887961   22829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:27:21.888047   22829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:27:21.888198   22829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:27:21.888387   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:27:21.895939   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:27:21.913729   22829 start.go:303] post-start completed in 187.684225ms
	I0128 11:27:21.913891   22829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:27:21.913965   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:21.975472   22829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55496 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/embed-certs-724000/id_rsa Username:docker}
	I0128 11:27:22.068901   22829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:27:22.073633   22829 fix.go:57] fixHost completed within 2.592790413s
	I0128 11:27:22.073645   22829 start.go:83] releasing machines lock for "embed-certs-724000", held for 2.592830171s
	I0128 11:27:22.073731   22829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-724000
	I0128 11:27:22.134145   22829 ssh_runner.go:195] Run: cat /version.json
	I0128 11:27:22.134150   22829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:27:22.134225   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:22.134231   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:22.197932   22829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55496 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/embed-certs-724000/id_rsa Username:docker}
	I0128 11:27:22.198060   22829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55496 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/embed-certs-724000/id_rsa Username:docker}
	I0128 11:27:22.289339   22829 ssh_runner.go:195] Run: systemctl --version
	I0128 11:27:22.347745   22829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:27:22.353050   22829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:27:22.369168   22829 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:27:22.369273   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:27:22.377481   22829 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:27:22.390407   22829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:27:22.397958   22829 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 11:27:22.397975   22829 start.go:483] detecting cgroup driver to use...
	I0128 11:27:22.397991   22829 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:22.398075   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:22.411753   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:27:22.420533   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:27:22.429628   22829 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:27:22.429706   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:27:22.439040   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:22.448742   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:27:22.457942   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:22.466930   22829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:27:22.475806   22829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
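
The run of sed commands above (11:27:22.411 through .485) rewrites /etc/containerd/config.toml field by field: pause image, OOM score handling, cgroup driver, runc runtime version, and CNI conf directory. Collected in one place, with the same expressions the log shows:

    # Consolidated containerd config edits from the commands above.
    toml=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$toml"
    sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$toml"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"   # keep cgroupfs
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$toml"
    sudo sed -i '/systemd_cgroup/d' "$toml"
    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$toml"
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$toml"
    sudo systemctl daemon-reload && sudo systemctl restart containerd
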
	I0128 11:27:22.485211   22829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:27:22.492776   22829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:27:22.500922   22829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:22.587566   22829 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:27:22.665867   22829 start.go:483] detecting cgroup driver to use...
	I0128 11:27:22.665886   22829 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:22.665949   22829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:27:22.677140   22829 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:27:22.677212   22829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:27:22.689113   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:22.703718   22829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:27:22.817106   22829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:27:22.881159   22829 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:27:22.881175   22829 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:27:22.895299   22829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:22.996003   22829 ssh_runner.go:195] Run: sudo systemctl restart docker
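
The 144-byte /etc/docker/daemon.json scp'd above is never echoed into the log. A plausible shape for a cgroupfs-driver Docker config is sketched below; the exact keys are an assumption, not the verbatim payload:

    # Assumed daemon.json content; the log records only its size (144 bytes).
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
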
	I0128 11:27:23.300002   22829 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:23.371552   22829 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:27:23.444962   22829 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:23.517152   22829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:23.588400   22829 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:27:23.612167   22829 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:27:23.612261   22829 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:27:23.616572   22829 start.go:551] Will wait 60s for crictl version
	I0128 11:27:23.637581   22829 ssh_runner.go:195] Run: which crictl
	I0128 11:27:23.643129   22829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:27:23.756028   22829 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:27:23.756108   22829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:27:23.786641   22829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
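
Both runtime probes above can be reproduced by hand on the node:

    sudo /usr/bin/crictl version                    # CRI runtime name/version (docker 20.10.23 here)
    docker version --format '{{.Server.Version}}'   # dockerd server version
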
	I0128 11:27:23.857851   22829 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:27:23.858047   22829 cli_runner.go:164] Run: docker exec -t embed-certs-724000 dig +short host.docker.internal
	I0128 11:27:23.978909   22829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:27:23.979022   22829 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:27:23.983887   22829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
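
The /etc/hosts update above uses a filter-and-append idiom: strip any stale line for the name, append the fresh mapping, and copy the temp file back via sudo cp (a plain > redirect would be opened by the unprivileged shell and fail). Generalized:

    # Idempotent /etc/hosts entry, as in the command above.
    ip=192.168.65.2 name=host.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
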
	I0128 11:27:23.994592   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:24.055717   22829 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:24.055797   22829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:24.080591   22829 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0128 11:27:24.080610   22829 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:27:24.080700   22829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:24.105345   22829 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0128 11:27:24.105372   22829 cache_images.go:84] Images are preloaded, skipping loading
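
The image list appears twice because minikube probes docker images once to decide whether the preload tarball needs extracting and once more after skipping it. The same check by hand:

    # List preloaded control-plane images, as both probes above do.
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'registry\.k8s\.io|gcr\.io'
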
	I0128 11:27:24.105464   22829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:27:24.178154   22829 cni.go:84] Creating CNI manager for ""
	I0128 11:27:24.178173   22829 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:27:24.178190   22829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:27:24.178208   22829 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-724000 NodeName:embed-certs-724000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:27:24.178338   22829 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-724000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	nodefs.available: "0%"
	nodefs.inodesFree: "0%"
	imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:27:24.178415   22829 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-724000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:27:24.178484   22829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:27:24.186881   22829 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:27:24.186943   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:27:24.194377   22829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0128 11:27:24.208858   22829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:27:24.222705   22829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
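
Note the rendered config is staged as kubeadm.yaml.new rather than written over the live file; the diff against the existing copy (run at 11:27:24.655 below) is what lets minikube pick a cluster restart instead of a full re-init:

    # Compare staged vs. live kubeadm config; an empty diff permits restartCluster.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
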
	I0128 11:27:24.236218   22829 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:27:24.240180   22829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:27:24.250204   22829 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000 for IP: 192.168.67.2
	I0128 11:27:24.250221   22829 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:24.250437   22829 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:27:24.250514   22829 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:27:24.250681   22829 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/client.key
	I0128 11:27:24.250771   22829 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/apiserver.key.c7fa3a9e
	I0128 11:27:24.250824   22829 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/proxy-client.key
	I0128 11:27:24.251068   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:27:24.251127   22829 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:27:24.251152   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:27:24.251206   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:27:24.251281   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:27:24.251347   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:27:24.251476   22829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:27:24.252104   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:27:24.270568   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:27:24.288048   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:27:24.305514   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/embed-certs-724000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:27:24.323220   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:27:24.341709   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:27:24.359423   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:27:24.377979   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:27:24.396100   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:27:24.414092   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:27:24.432361   22829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:27:24.450848   22829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:27:24.464647   22829 ssh_runner.go:195] Run: openssl version
	I0128 11:27:24.470538   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:27:24.479149   22829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:27:24.483270   22829 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:27:24.483320   22829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:27:24.489161   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 11:27:24.496850   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:27:24.505133   22829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:27:24.509400   22829 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:27:24.509450   22829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:27:24.515192   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:27:24.523190   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:27:24.531459   22829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:24.536193   22829 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:24.536248   22829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:24.541967   22829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
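
Each CA installed above is also linked under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL's CApath lookup locates trust anchors. For a single cert:

    # Install a CA under its subject-hash name, as done above for each PEM.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
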
	I0128 11:27:24.549786   22829 kubeadm.go:401] StartCluster: {Name:embed-certs-724000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:24.549888   22829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:27:24.574003   22829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:27:24.582544   22829 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:27:24.582558   22829 kubeadm.go:633] restartCluster start
	I0128 11:27:24.582615   22829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:27:24.589746   22829 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:24.589832   22829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-724000
	I0128 11:27:24.652364   22829 kubeconfig.go:135] verify returned: extract IP: "embed-certs-724000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:27:24.652521   22829 kubeconfig.go:146] "embed-certs-724000" context is missing from /Users/jenkins/minikube-integration/15565-2556/kubeconfig - will repair!
	I0128 11:27:24.654011   22829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/kubeconfig: {Name:mk9285754a110019f97a480561fbfd0056cc86f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:24.655376   22829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:27:24.663615   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:24.663676   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:24.672572   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:25.174257   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:25.174386   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:25.185851   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:25.673426   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:25.673588   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:25.684679   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:26.173327   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:26.173406   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:26.183035   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:26.673176   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:26.673433   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:26.684761   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:27.173764   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:27.173984   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:27.185108   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:27.673397   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:27.673489   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:27.683218   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:28.174216   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:28.174366   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:28.185857   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:28.674148   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:28.674309   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:28.685202   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:29.172715   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:29.172846   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:29.182544   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:29.674170   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:29.674324   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:29.685365   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:30.174136   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:30.174245   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:30.185018   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:30.673454   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:30.673587   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:30.683750   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:31.174247   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:31.174443   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:31.185531   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:31.673024   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:31.673283   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:31.684420   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:32.172755   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:32.172860   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:32.183925   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:32.673541   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:32.673694   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:32.684281   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:33.174273   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:33.174387   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:33.185797   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:33.673904   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:33.674006   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:33.683958   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.174012   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:34.174137   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:34.185173   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.673806   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:34.673910   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:34.685973   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.685983   22829 api_server.go:165] Checking apiserver status ...
	I0128 11:27:34.686036   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:27:34.694505   22829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.694518   22829 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
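
The ten seconds of identical pgrep probes above are a fixed-interval wait: the process is polled roughly every 500 ms until a deadline, after which minikube falls back to reconfiguring. A minimal sketch of that loop:

    # Poll for the apiserver process until it appears or a deadline (60s here) passes.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
      sleep 0.5
    done
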
	I0128 11:27:34.694525   22829 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:27:34.694590   22829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:27:34.719178   22829 docker.go:456] Stopping containers: [6e525c8a7f73 a76bea4034ee 904e764c79b0 116c79310ecf f1e4d2a27a1c e8ff310c9c24 bb536ddd00ef fe3669ea02a0 fbe8be3de684 bf0fb156d942 02edb893bbeb 09431226fa3b 88ebec0ad694 fe540bec50a8 03b6e903bd28]
	I0128 11:27:34.719267   22829 ssh_runner.go:195] Run: docker stop 6e525c8a7f73 a76bea4034ee 904e764c79b0 116c79310ecf f1e4d2a27a1c e8ff310c9c24 bb536ddd00ef fe3669ea02a0 fbe8be3de684 bf0fb156d942 02edb893bbeb 09431226fa3b 88ebec0ad694 fe540bec50a8 03b6e903bd28
	I0128 11:27:34.744555   22829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:27:34.755205   22829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:27:34.763147   22829 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 28 19:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 19:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan 28 19:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:26 /etc/kubernetes/scheduler.conf
	
	I0128 11:27:34.763204   22829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:27:34.770917   22829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:27:34.778453   22829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:27:34.785859   22829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.785940   22829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:27:34.793460   22829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:27:34.801217   22829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:34.801274   22829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:27:34.808824   22829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:27:34.816453   22829 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:27:34.816463   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:34.871013   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:35.390226   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:35.523767   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:35.594873   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
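
The restart path re-runs individual kubeadm init phases rather than a full kubeadm init, so state already on disk (certificates, etcd data) is reused where valid. The five phases above, collected:

    # Phase sequence from the restartCluster path above.
    KPATH="/var/lib/minikube/binaries/v1.26.1:$PATH"
    # $phase is deliberately unquoted so 'certs all' splits into two arguments.
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="$KPATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
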
	I0128 11:27:35.702409   22829 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:27:35.702484   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:36.215879   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:36.715234   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:37.214025   22829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:37.226576   22829 api_server.go:71] duration metric: took 1.52417039s to wait for apiserver process to appear ...
	I0128 11:27:37.226591   22829 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:27:37.226604   22829 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55495/healthz ...
	I0128 11:27:39.091747   22829 api_server.go:278] https://127.0.0.1:55495/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:27:39.091777   22829 api_server.go:102] status: https://127.0.0.1:55495/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:27:39.593713   22829 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55495/healthz ...
	I0128 11:27:39.600607   22829 api_server.go:278] https://127.0.0.1:55495/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:27:39.600626   22829 api_server.go:102] status: https://127.0.0.1:55495/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:40.091873   22829 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55495/healthz ...
	I0128 11:27:40.096987   22829 api_server.go:278] https://127.0.0.1:55495/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:27:40.097000   22829 api_server.go:102] status: https://127.0.0.1:55495/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:40.591867   22829 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55495/healthz ...
	I0128 11:27:40.597292   22829 api_server.go:278] https://127.0.0.1:55495/healthz returned 200:
	ok
	I0128 11:27:40.604131   22829 api_server.go:140] control plane version: v1.26.1
	I0128 11:27:40.604144   22829 api_server.go:130] duration metric: took 3.377553113s to wait for apiserver health ...
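
The 403 -> 500 -> 200 progression above is a normal apiserver startup: anonymous /healthz requests are forbidden until the RBAC bootstrap roles exist, the endpoint then returns 500 while post-start hooks (rbac/bootstrap-roles, the system priority classes) finish, and finally 200 "ok". Probing by hand:

    # Probe apiserver health as the loop above does; -k skips TLS verification,
    # since the probe runs before the client trusts the cluster CA.
    curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:55495/healthz
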
	I0128 11:27:40.604151   22829 cni.go:84] Creating CNI manager for ""
	I0128 11:27:40.604160   22829 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:27:40.626024   22829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:27:40.663651   22829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:27:40.675689   22829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 11:27:40.689770   22829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:27:40.697393   22829 system_pods.go:59] 8 kube-system pods found
	I0128 11:27:40.697410   22829 system_pods.go:61] "coredns-787d4945fb-62fcj" [d5b57880-0bec-4cb6-8490-028085438e55] Running
	I0128 11:27:40.697416   22829 system_pods.go:61] "etcd-embed-certs-724000" [18c2c0a4-11ce-4e2a-ac9d-bf5075857187] Running
	I0128 11:27:40.697419   22829 system_pods.go:61] "kube-apiserver-embed-certs-724000" [61a0e42b-4994-4fd2-843b-db820d5db1e1] Running
	I0128 11:27:40.697427   22829 system_pods.go:61] "kube-controller-manager-embed-certs-724000" [3c2c59eb-1251-4e94-81c1-1397dee036c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:27:40.697432   22829 system_pods.go:61] "kube-proxy-q8xdb" [8affc8fc-fba3-4a97-a06f-c214a0d45e55] Running
	I0128 11:27:40.697437   22829 system_pods.go:61] "kube-scheduler-embed-certs-724000" [9d865458-4abf-48e2-8c7a-17f2991dfb37] Running
	I0128 11:27:40.697442   22829 system_pods.go:61] "metrics-server-7997d45854-k5t4s" [5da4b20b-5695-4cb3-bc41-e98e7f827237] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:27:40.697446   22829 system_pods.go:61] "storage-provisioner" [5cef2681-9dbd-460e-a157-0fba0f4228f2] Running
	I0128 11:27:40.697450   22829 system_pods.go:74] duration metric: took 7.670236ms to wait for pod list to return data ...
	I0128 11:27:40.697456   22829 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:27:40.700882   22829 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0128 11:27:40.700900   22829 node_conditions.go:123] node cpu capacity is 6
	I0128 11:27:40.700910   22829 node_conditions.go:105] duration metric: took 3.450518ms to run NodePressure ...
	I0128 11:27:40.700926   22829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:41.010632   22829 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0128 11:27:41.015226   22829 kubeadm.go:784] kubelet initialised
	I0128 11:27:41.015239   22829 kubeadm.go:785] duration metric: took 4.591688ms waiting for restarted kubelet to initialise ...
	I0128 11:27:41.015247   22829 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 11:27:41.022281   22829 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-62fcj" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.085136   22829 pod_ready.go:92] pod "coredns-787d4945fb-62fcj" in "kube-system" namespace has status "Ready":"True"
	I0128 11:27:41.085155   22829 pod_ready.go:81] duration metric: took 62.848873ms waiting for pod "coredns-787d4945fb-62fcj" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.085165   22829 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-724000" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.094127   22829 pod_ready.go:92] pod "etcd-embed-certs-724000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:27:41.094141   22829 pod_ready.go:81] duration metric: took 8.970398ms waiting for pod "etcd-embed-certs-724000" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.094156   22829 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-724000" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.102142   22829 pod_ready.go:92] pod "kube-apiserver-embed-certs-724000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:27:41.102153   22829 pod_ready.go:81] duration metric: took 7.990369ms waiting for pod "kube-apiserver-embed-certs-724000" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:41.102166   22829 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-724000" in "kube-system" namespace to be "Ready" ...
	I0128 11:27:43.119785   22829 pod_ready.go:102] pod "kube-controller-manager-embed-certs-724000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:27:45.620592   22829 pod_ready.go:102] pod "kube-controller-manager-embed-certs-724000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:27:47.621144   22829 pod_ready.go:102] pod "kube-controller-manager-embed-certs-724000" in "kube-system" namespace has status "Ready":"False"
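
These pod_ready waits poll pod status through the API directly. An equivalent wait expressed with kubectl is sketched below; it is an illustration, not a command taken from this log:

    # Hypothetical kubectl equivalent of the pod_ready wait above.
    kubectl --context embed-certs-724000 -n kube-system wait pod \
      -l component=kube-controller-manager --for=condition=Ready --timeout=4m
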
	I0128 11:27:47.737949   22056 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:27:47.738172   22056 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:27:47.738188   22056 kubeadm.go:322] 
	I0128 11:27:47.738238   22056 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:27:47.738279   22056 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:27:47.738284   22056 kubeadm.go:322] 
	I0128 11:27:47.738364   22056 kubeadm.go:322] This error is likely caused by:
	I0128 11:27:47.738417   22056 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:27:47.738551   22056 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:27:47.738568   22056 kubeadm.go:322] 
	I0128 11:27:47.738688   22056 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:27:47.738728   22056 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:27:47.738764   22056 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:27:47.738769   22056 kubeadm.go:322] 
	I0128 11:27:47.738891   22056 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:27:47.738995   22056 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:27:47.739112   22056 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:27:47.739180   22056 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:27:47.739265   22056 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:27:47.739307   22056 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:27:47.741658   22056 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:27:47.741725   22056 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:27:47.741828   22056 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:27:47.741917   22056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:27:47.741984   22056 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:27:47.742053   22056 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
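
The log interleaves a second start job here (pid 22056, a profile pinned to Kubernetes v1.16.0) whose kubeadm init has just timed out waiting for the kubelet. The triage steps kubeadm recommends above, runnable inside the node:

    # Standard kubelet/control-plane triage, per the kubeadm hints above.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    docker ps -a | grep kube | grep -v pause   # locate a crashed control-plane container
    # then inspect it: docker logs <CONTAINERID>
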
	I0128 11:27:47.742072   22056 kubeadm.go:403] StartCluster complete in 8m4.712143834s
	I0128 11:27:47.742162   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:27:47.765882   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.765895   22056 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:27:47.765967   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:27:47.789785   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.789799   22056 logs.go:281] No container was found matching "etcd"
	I0128 11:27:47.789871   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:27:47.814163   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.814176   22056 logs.go:281] No container was found matching "coredns"
	I0128 11:27:47.814242   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:27:47.875047   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.875060   22056 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:27:47.875129   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:27:47.898104   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.898117   22056 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:27:47.898202   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:27:47.922817   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.922832   22056 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:27:47.922901   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:27:47.949939   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.949954   22056 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:27:47.950024   22056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:27:47.975256   22056 logs.go:279] 0 containers: []
	W0128 11:27:47.975269   22056 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:27:47.975277   22056 logs.go:124] Gathering logs for dmesg ...
	I0128 11:27:47.975284   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:27:47.994111   22056 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:27:47.994126   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:27:48.051842   22056 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:27:48.051853   22056 logs.go:124] Gathering logs for Docker ...
	I0128 11:27:48.051859   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:27:48.068182   22056 logs.go:124] Gathering logs for container status ...
	I0128 11:27:48.068196   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:27:50.120577   22056 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052372173s)
	I0128 11:27:50.120688   22056 logs.go:124] Gathering logs for kubelet ...
	I0128 11:27:50.120697   22056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0128 11:27:50.159662   22056 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:27:50.159684   22056 out.go:239] * 
	W0128 11:27:50.159792   22056 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:27:50.159833   22056 out.go:239] * 
	W0128 11:27:50.160477   22056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 11:27:50.223100   22056 out.go:177] 
	W0128 11:27:50.281364   22056 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:27:50.281563   22056 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:27:50.281635   22056 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:27:50.355023   22056 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:27:51 UTC. --
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.043684247Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044647330Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044903083Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044949007Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.088592507Z" level=info msg="Starting up"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090253834Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090291624Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090307486Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090315468Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091569449Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091611067Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091623295Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091629230Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.098516161Z" level=info msg="Loading containers: start."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.175682495Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.208852655Z" level=info msg="Loading containers: done."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.216989221Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.217057643Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.241129214Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.244202317Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T19:27:54Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan28 18:55] hrtimer: interrupt took 1291156 ns
	
	* 
	* ==> kernel <==
	*  19:27:54 up  1:27,  0 users,  load average: 1.62, 1.15, 1.31
	Linux old-k8s-version-867000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:27:54 UTC. --
	Jan 28 19:27:52 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: I0128 19:27:53.249746   14912 server.go:410] Version: v1.16.0
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: I0128 19:27:53.250039   14912 plugins.go:100] No cloud provider specified.
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: I0128 19:27:53.250075   14912 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: I0128 19:27:53.251790   14912 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: W0128 19:27:53.252658   14912 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: W0128 19:27:53.252729   14912 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:27:53 old-k8s-version-867000 kubelet[14912]: F0128 19:27:53.252753   14912 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:27:53 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: I0128 19:27:53.999937   14924 server.go:410] Version: v1.16.0
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: I0128 19:27:54.000110   14924 plugins.go:100] No cloud provider specified.
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: I0128 19:27:54.000120   14924 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: I0128 19:27:54.001866   14924 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: W0128 19:27:54.002575   14924 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: W0128 19:27:54.002647   14924 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:27:54 old-k8s-version-867000 kubelet[14924]: F0128 19:27:54.002679   14924 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:27:54 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:27:54 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0128 11:27:54.319444   22932 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
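
The describe-nodes failure in this stderr block is a symptom rather than a root cause: kubectl is pointed at the apiserver on localhost:8443, which never came up. A minimal way to confirm that from the host, reusing the profile name from this run (and assuming curl is available inside the node image), would be:

    # probe the apiserver health endpoint from inside the minikube node;
    # in this state the expected outcome is "connection refused"
    minikube ssh -p old-k8s-version-867000 -- curl -sk https://localhost:8443/healthz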
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (433.599376ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867000" apiserver is not running, skipping kubectl commands (state="Stopped")
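
A non-zero exit from the status probe reflects a stopped component rather than a command failure, which is why the helper records it as "(may be ok)"; "Stopped" is the rendered value of the APIServer field in minikube's status template. A broader manual probe along the same lines, where .Host and .Kubelet are assumed to exist alongside the .APIServer field this suite already uses, might be:

    # render host, kubelet and apiserver state in one Go template
    minikube status -p old-k8s-version-867000 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'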
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (497.64s)
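
The kubelet journal above pins down the concrete failure behind the generic kubeadm timeout: "failed to run Kubelet: mountpoint for cpu not found", meaning the v1.16 kubelet cannot locate a cpu cgroup hierarchy on this node. A sketch of the follow-up the log itself recommends, plus minikube's suggested retry (profile name taken from this run; whether the systemd cgroup driver actually cures the missing cpu mountpoint is not established by this log):

    # inspect kubelet state inside the node
    minikube ssh -p old-k8s-version-867000 -- systemctl status kubelet
    minikube ssh -p old-k8s-version-867000 -- sudo journalctl -xeu kubelet
    # list the kubernetes containers the runtime managed to start
    minikube ssh -p old-k8s-version-867000 -- 'docker ps -a | grep kube | grep -v pause'
    # retry the start with the cgroup driver minikube suggests
    minikube start -p old-k8s-version-867000 --extra-config=kubelet.cgroup-driver=systemd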

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:28:06.822616    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:28:47.331828    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:29:04.217983    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:29:16.635070    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 11:29:17.525160    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:29:23.096095    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:29:30.288715    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:29:40.463737    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:30:53.340918    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:31:02.047613    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:31:03.510918    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:31:08.751875    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:31:09.019533    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.024761    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.035568    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.056449    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.098316    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.180479    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.341763    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:09.663918    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:10.304976    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:11.585211    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:31:14.145699    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:31:19.267835    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:31:24.330617    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:31:29.508497    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:31:49.990701    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:32:19.685677    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:32:30.950983    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
E0128 11:32:31.799157    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:32:47.378402    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:32:49.706690    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:32:54.483785    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:33:06.822242    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:33:47.329443    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 11:33:52.872134    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:34:04.218350    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:34:12.759063    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:34:16.635036    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:34:23.095781    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
E0128 11:34:29.933002    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:34:30.286334    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:34:40.462378    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:35:27.267062    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:35:46.143805    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:36:02.047014    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:36:08.750207    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:36:09.019073    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:36:24.330008    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (445.221398ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-867000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
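
Each WARNING line in this test is one iteration of the same poll: list pods in the kubernetes-dashboard namespace matching the k8s-app=kubernetes-dashboard label selector through the apiserver forwarded on 127.0.0.1:55319. Because the apiserver never recovered after the restart, every poll ends in EOF until the 9m0s budget expires. The equivalent manual check, assuming the kubeconfig context minikube normally creates under the profile name, would be:

    # list dashboard pods by the same label selector the test waits on
    kubectl --context old-k8s-version-867000 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard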
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:19:38.982033396Z",
	            "FinishedAt": "2023-01-28T19:19:35.984970564Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56bc2e5c762ee218e9cc648a942743397f45d38fe7e80bb7ebfa5abcf2ee1586",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55320"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55321"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55322"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55323"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55319"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/56bc2e5c762e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "fc33025b57ea548e3024d3d6addb6d5cbf64cfd4291900853273d019fcc07246",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
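Note how the inspect output documents minikube's port scheme: HostConfig.PortBindings asks for HostPort "0" on 127.0.0.1, so Docker assigns ephemeral host ports, and the bindings actually allocated appear under NetworkSettings.Ports (e.g. 8443/tcp published at 127.0.0.1:55319). A minimal Go sketch of resolving such a binding with the same inspect template this log runs later; the hostPortFor helper is illustrative, not minikube code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor resolves the ephemeral host port Docker assigned to a
    // container port, using the same Go template minikube runs via the CLI.
    func hostPortFor(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPortFor("old-k8s-version-867000", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh is published on 127.0.0.1:" + p) // e.g. 55320 in the inspect above
    }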
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (423.175608ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
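The harness tolerates the non-zero exit here because minikube status signals cluster state through its exit code as well as its stdout; with the host Running but the apiserver Stopped, a non-zero status is expected. A hedged Go sketch of capturing both channels (what individual codes mean is minikube's business and not spelled out here):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-amd64", "status",
    		"--format={{.Host}}", "-p", "old-k8s-version-867000")
    	out, err := cmd.Output()
    	code := 0
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		code = ee.ExitCode() // non-zero encodes a degraded state, e.g. 2 above
    	} else if err != nil {
    		panic(err) // the binary could not be run at all
    	}
    	fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
    }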
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25: (3.560068295s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-625000        | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625000             | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:16 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:16 PST | 28 Jan 23 11:25 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-867000   | old-k8s-version-867000       | jenkins | v1.29.0 | 28 Jan 23 11:18 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867000                         | old-k8s-version-867000       | jenkins | v1.29.0 | 28 Jan 23 11:19 PST | 28 Jan 23 11:19 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867000        | old-k8s-version-867000       | jenkins | v1.29.0 | 28 Jan 23 11:19 PST | 28 Jan 23 11:19 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867000                         | old-k8s-version-867000       | jenkins | v1.29.0 | 28 Jan 23 11:19 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-625000 sudo                         | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	| delete  | -p no-preload-625000                              | no-preload-625000            | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	| start   | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:26 PST | 28 Jan 23 11:26 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-724000       | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-724000            | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:27 PST | 28 Jan 23 11:36 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-724000 sudo                        | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	| delete  | -p embed-certs-724000                             | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	| delete  | -p                                                | disable-driver-mounts-170000 | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | disable-driver-mounts-170000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:36 PST |                     |
	|         | default-k8s-diff-port-218000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:36:53
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:36:53.086249   23610 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:36:53.086502   23610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:36:53.086507   23610 out.go:309] Setting ErrFile to fd 2...
	I0128 11:36:53.086511   23610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:36:53.086627   23610 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:36:53.087177   23610 out.go:303] Setting JSON to false
	I0128 11:36:53.106131   23610 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5788,"bootTime":1674928825,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:36:53.106232   23610 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:36:53.128018   23610 out.go:177] * [default-k8s-diff-port-218000] minikube v1.29.0 on Darwin 13.2
	I0128 11:36:53.171229   23610 notify.go:220] Checking for updates...
	I0128 11:36:53.192215   23610 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:36:53.213322   23610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:36:53.255249   23610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:36:53.276375   23610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:36:53.297207   23610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:36:53.318590   23610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:36:53.341458   23610 config.go:180] Loaded profile config "old-k8s-version-867000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:36:53.341537   23610 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:36:53.404006   23610 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:36:53.404142   23610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:36:53.550637   23610 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:36:53.456397518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
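The `docker system info --format "{{json .}}"` dumps above are how the driver probe learns the daemon's capabilities. Decoding one only needs a trimmed struct; the type below keeps just a few fields visible in this output and is an assumption about shape, not minikube's actual type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo holds just the fields of `docker system info` that this
    // log dump surfaces; the real output carries many more.
    type dockerInfo struct {
    	NCPU          int    `json:"NCPU"`
    	MemTotal      int64  `json:"MemTotal"`
    	ServerVersion string `json:"ServerVersion"`
    	Driver        string `json:"Driver"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// For the run above: 6 CPUs, ~6.2GB memory, server 20.10.22, overlay2.
    	fmt.Printf("%d cpus, %d bytes ram, docker %s (%s)\n",
    		info.NCPU, info.MemTotal, info.ServerVersion, info.Driver)
    }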
	I0128 11:36:53.594568   23610 out.go:177] * Using the docker driver based on user configuration
	I0128 11:36:53.616509   23610 start.go:296] selected driver: docker
	I0128 11:36:53.616547   23610 start.go:857] validating driver "docker" against <nil>
	I0128 11:36:53.616572   23610 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:36:53.620605   23610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:36:53.769382   23610 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:36:53.672719658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:36:53.769482   23610 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 11:36:53.769671   23610 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:36:53.790826   23610 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 11:36:53.812710   23610 cni.go:84] Creating CNI manager for ""
	I0128 11:36:53.812748   23610 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:36:53.812764   23610 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0128 11:36:53.812784   23610 start_flags.go:319] config:
	{Name:default-k8s-diff-port-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-218000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:36:53.855530   23610 out.go:177] * Starting control plane node default-k8s-diff-port-218000 in cluster default-k8s-diff-port-218000
	I0128 11:36:53.876926   23610 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:36:53.898680   23610 out.go:177] * Pulling base image ...
	I0128 11:36:53.940546   23610 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:36:53.940557   23610 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:36:53.940596   23610 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:36:53.940607   23610 cache.go:57] Caching tarball of preloaded images
	I0128 11:36:53.940735   23610 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:36:53.940745   23610 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:36:53.941298   23610 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/config.json ...
	I0128 11:36:53.941376   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/config.json: {Name:mk62b38c620aa89f4198438b3907b0f6006db109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
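Profile config writes like the one above are serialized behind a named lock with a 500ms retry delay and a 1m timeout (the Delay/Timeout fields in the log line). The sketch below approximates that with a plain O_EXCL lockfile; minikube's real lock is a named mutex, so treat this as a simplified stand-in:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // writeLocked approximates the WriteFile-with-lock step above: retry an
    // exclusive lockfile every delay until timeout, then write and unlock.
    func writeLocked(path string, data []byte, delay, timeout time.Duration) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			break
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring %s", lock)
    		}
    		time.Sleep(delay)
    	}
    	defer os.Remove(lock)
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	err := writeLocked("config.json", []byte(`{"Name":"default-k8s-diff-port-218000"}`),
    		500*time.Millisecond, time.Minute) // Delay/Timeout match the log line
    	fmt.Println(err)
    }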
	I0128 11:36:54.000139   23610 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:36:54.000159   23610 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:36:54.000176   23610 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:36:54.000218   23610 start.go:364] acquiring machines lock for default-k8s-diff-port-218000: {Name:mkd7fdd0aa8ccea55071ebdb54854a4e9f164099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:36:54.000391   23610 start.go:368] acquired machines lock for "default-k8s-diff-port-218000" in 161.895µs
	I0128 11:36:54.000420   23610 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-218000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:36:54.000515   23610 start.go:125] createHost starting for "" (driver="docker")
	I0128 11:36:54.022529   23610 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 11:36:54.022940   23610 start.go:159] libmachine.API.Create for "default-k8s-diff-port-218000" (driver="docker")
	I0128 11:36:54.022996   23610 client.go:168] LocalClient.Create starting
	I0128 11:36:54.023163   23610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem
	I0128 11:36:54.023252   23610 main.go:141] libmachine: Decoding PEM data...
	I0128 11:36:54.023284   23610 main.go:141] libmachine: Parsing certificate...
	I0128 11:36:54.023419   23610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem
	I0128 11:36:54.023507   23610 main.go:141] libmachine: Decoding PEM data...
	I0128 11:36:54.023533   23610 main.go:141] libmachine: Parsing certificate...
	I0128 11:36:54.024388   23610 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 11:36:54.081086   23610 cli_runner.go:211] docker network inspect default-k8s-diff-port-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 11:36:54.081184   23610 network_create.go:281] running [docker network inspect default-k8s-diff-port-218000] to gather additional debugging logs...
	I0128 11:36:54.081208   23610 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-218000
	W0128 11:36:54.137737   23610 cli_runner.go:211] docker network inspect default-k8s-diff-port-218000 returned with exit code 1
	I0128 11:36:54.137767   23610 network_create.go:284] error running [docker network inspect default-k8s-diff-port-218000]: docker network inspect default-k8s-diff-port-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-218000
	I0128 11:36:54.137780   23610 network_create.go:286] output of [docker network inspect default-k8s-diff-port-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-218000
	
	** /stderr **
	I0128 11:36:54.137869   23610 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 11:36:54.197806   23610 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:36:54.198157   23610 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ecdc00}
	I0128 11:36:54.198174   23610 network_create.go:123] attempt to create docker network default-k8s-diff-port-218000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0128 11:36:54.198247   23610 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 default-k8s-diff-port-218000
	W0128 11:36:54.254609   23610 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 default-k8s-diff-port-218000 returned with exit code 1
	W0128 11:36:54.254650   23610 network_create.go:148] failed to create docker network default-k8s-diff-port-218000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 default-k8s-diff-port-218000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 11:36:54.254668   23610 network_create.go:115] failed to create docker network default-k8s-diff-port-218000 192.168.58.0/24, will retry: subnet is taken
	I0128 11:36:54.256243   23610 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:36:54.256565   23610 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dfa2e0}
	I0128 11:36:54.256575   23610 network_create.go:123] attempt to create docker network default-k8s-diff-port-218000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0128 11:36:54.256644   23610 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 default-k8s-diff-port-218000
	I0128 11:36:54.347406   23610 network_create.go:107] docker network default-k8s-diff-port-218000 192.168.67.0/24 created
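The preceding lines show the subnet-selection loop: reserved /24s are skipped, `docker network create` is attempted, and a "Pool overlaps with other one on this address space" error advances to the next candidate (192.168.58.0/24 was taken; 192.168.67.0/24 succeeded). A compact sketch of that retry loop; the candidate list is illustrative and the minikube labels are omitted:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	name := "default-k8s-diff-port-218000"
    	// Candidate private /24s, probed in the same order as the log above.
    	for _, third := range []int{49, 58, 67, 76, 85} {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		gateway := fmt.Sprintf("192.168.%d.1", third)
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc",
    			"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
    		if err == nil {
    			fmt.Printf("created %s on %s\n", name, subnet)
    			return
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet is taken, as with 192.168.58.0/24 above; retry
    		}
    		panic(string(out)) // some other failure: give up rather than retry
    	}
    }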
	I0128 11:36:54.347438   23610 kic.go:117] calculated static IP "192.168.67.2" for the "default-k8s-diff-port-218000" container
	I0128 11:36:54.347563   23610 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 11:36:54.407795   23610 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-218000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 --label created_by.minikube.sigs.k8s.io=true
	I0128 11:36:54.465289   23610 oci.go:103] Successfully created a docker volume default-k8s-diff-port-218000
	I0128 11:36:54.465400   23610 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 --entrypoint /usr/bin/test -v default-k8s-diff-port-218000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 11:36:54.901663   23610 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-218000
	I0128 11:36:54.901704   23610 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:36:54.901720   23610 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 11:36:54.901829   23610 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-218000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 11:37:01.684298   23610 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-218000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (6.782392749s)
	I0128 11:37:01.684318   23610 kic.go:199] duration metric: took 6.782607 seconds to extract preloaded images to volume
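Preloading sidesteps in-cluster image pulls: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which extracts it straight into the cluster's named /var volume. Roughly, with paths shortened (the full paths and digest-pinned image appear in the log line above):

    package main

    import (
    	"os/exec"
    )

    func main() {
    	const kicbase = "gcr.io/k8s-minikube/kicbase:v0.0.37" // digest pin omitted here
    	tarball := "preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
    	// Throwaway container: mount the tarball read-only, mount the cluster's
    	// /var volume, and let tar do the extraction (mirrors the log line above).
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "default-k8s-diff-port-218000:/extractDir",
    		kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out)) // took ~6.8s in the run above when it succeeded
    	}
    }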
	I0128 11:37:01.684430   23610 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 11:37:01.830161   23610 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-218000 --name default-k8s-diff-port-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-218000 --network default-k8s-diff-port-218000 --ip 192.168.67.2 --volume default-k8s-diff-port-218000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 11:37:02.195868   23610 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-218000 --format={{.State.Running}}
	I0128 11:37:02.266622   23610 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-218000 --format={{.State.Status}}
	I0128 11:37:02.332845   23610 cli_runner.go:164] Run: docker exec default-k8s-diff-port-218000 stat /var/lib/dpkg/alternatives/iptables
	I0128 11:37:02.452495   23610 oci.go:144] the created container "default-k8s-diff-port-218000" has a running status.
	I0128 11:37:02.452556   23610 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa...
	I0128 11:37:02.590529   23610 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 11:37:02.699836   23610 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-218000 --format={{.State.Status}}
	I0128 11:37:02.758964   23610 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 11:37:02.758987   23610 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 11:37:02.924581   23610 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-218000 --format={{.State.Status}}
	I0128 11:37:02.986835   23610 machine.go:88] provisioning docker machine ...
	I0128 11:37:02.986893   23610 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-218000"
	I0128 11:37:02.987013   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:03.048691   23610 main.go:141] libmachine: Using SSH client type: native
	I0128 11:37:03.048884   23610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56147 <nil> <nil>}
	I0128 11:37:03.048900   23610 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-218000 && echo "default-k8s-diff-port-218000" | sudo tee /etc/hostname
	I0128 11:37:03.192287   23610 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-218000
	
	I0128 11:37:03.192400   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:03.252204   23610 main.go:141] libmachine: Using SSH client type: native
	I0128 11:37:03.252359   23610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56147 <nil> <nil>}
	I0128 11:37:03.252377   23610 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:37:03.387259   23610 main.go:141] libmachine: SSH cmd err, output: <nil>: 
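Provisioning pushes these shell snippets (hostname, /etc/hosts fix-up) over SSH to the container's published port 22, here 127.0.0.1:56147, authenticating with the generated machine key. A bare-bones equivalent using golang.org/x/crypto/ssh; the key path is abbreviated and the whole helper is illustrative rather than libmachine's implementation:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(".minikube/machines/default-k8s-diff-port-218000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, as here
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:56147", cfg) // port from the log
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname default-k8s-diff-port-218000 && echo "default-k8s-diff-port-218000" | sudo tee /etc/hostname`)
    	fmt.Println(string(out), err)
    }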
	I0128 11:37:03.387286   23610 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:37:03.387311   23610 ubuntu.go:177] setting up certificates
	I0128 11:37:03.387325   23610 provision.go:83] configureAuth start
	I0128 11:37:03.387413   23610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-218000
	I0128 11:37:03.445735   23610 provision.go:138] copyHostCerts
	I0128 11:37:03.445835   23610 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:37:03.445843   23610 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:37:03.445959   23610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:37:03.446167   23610 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:37:03.446174   23610 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:37:03.446241   23610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:37:03.446392   23610 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:37:03.446400   23610 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:37:03.446465   23610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:37:03.446590   23610 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-218000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-218000]
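
The server certificate generated here is an ordinary CA-signed cert whose SANs are exactly the list in the log line above. A rough openssl equivalent, for illustration only (file names, subject, and validity period are assumptions):

    # sign a server cert against the minikube CA with the logged SANs
    openssl req -new -key server-key.pem -subj "/O=jenkins.default-k8s-diff-port-218000" |
      openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-218000') \
        -out server.pem
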
	I0128 11:37:03.521738   23610 provision.go:172] copyRemoteCerts
	I0128 11:37:03.521801   23610 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:37:03.521854   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:03.581509   23610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56147 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa Username:docker}
	I0128 11:37:03.678463   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:37:03.697398   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0128 11:37:03.717167   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:37:03.736716   23610 provision.go:86] duration metric: configureAuth took 349.378426ms
	I0128 11:37:03.736732   23610 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:37:03.736902   23610 config.go:180] Loaded profile config "default-k8s-diff-port-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:37:03.736976   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:03.797502   23610 main.go:141] libmachine: Using SSH client type: native
	I0128 11:37:03.797661   23610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56147 <nil> <nil>}
	I0128 11:37:03.797676   23610 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:37:03.934352   23610 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:37:03.934365   23610 ubuntu.go:71] root file system type: overlay
	I0128 11:37:03.934510   23610 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:37:03.934589   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:03.994623   23610 main.go:141] libmachine: Using SSH client type: native
	I0128 11:37:03.994781   23610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56147 <nil> <nil>}
	I0128 11:37:03.994853   23610 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:37:04.138796   23610 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:37:04.138888   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:04.198140   23610 main.go:141] libmachine: Using SSH client type: native
	I0128 11:37:04.198307   23610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56147 <nil> <nil>}
	I0128 11:37:04.198322   23610 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:37:04.804299   23610 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:37:04.136468907 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
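
The "diff -u ... || { mv ...; }" command issued above is a change-detection idiom: the unit is swapped in and docker restarted only when the rendered file differs from what is installed, so an unchanged rerun is a no-op. The same pattern as a standalone sketch:

    # replace the unit and restart the daemon only when the contents changed
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new >/dev/null; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    fi
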
	
	I0128 11:37:04.804321   23610 machine.go:91] provisioned docker machine in 1.817457728s
	I0128 11:37:04.804328   23610 client.go:171] LocalClient.Create took 10.78133764s
	I0128 11:37:04.804352   23610 start.go:167] duration metric: libmachine.API.Create for "default-k8s-diff-port-218000" took 10.781426153s
	I0128 11:37:04.804362   23610 start.go:300] post-start starting for "default-k8s-diff-port-218000" (driver="docker")
	I0128 11:37:04.804368   23610 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:37:04.804446   23610 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:37:04.804504   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:04.865812   23610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56147 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa Username:docker}
	I0128 11:37:04.961312   23610 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:37:04.965187   23610 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:37:04.965206   23610 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:37:04.965214   23610 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:37:04.965220   23610 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:37:04.965230   23610 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:37:04.965352   23610 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:37:04.965540   23610 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:37:04.965736   23610 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:37:04.973148   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:37:04.990766   23610 start.go:303] post-start completed in 186.379715ms
	I0128 11:37:04.991423   23610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-218000
	I0128 11:37:05.051902   23610 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/config.json ...
	I0128 11:37:05.052350   23610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:37:05.052413   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:05.113030   23610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56147 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa Username:docker}
	I0128 11:37:05.204499   23610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:37:05.209504   23610 start.go:128] duration metric: createHost completed in 11.208985701s
	I0128 11:37:05.209531   23610 start.go:83] releasing machines lock for "default-k8s-diff-port-218000", held for 11.209139003s
	I0128 11:37:05.209636   23610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-218000
	I0128 11:37:05.271730   23610 ssh_runner.go:195] Run: cat /version.json
	I0128 11:37:05.271762   23610 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:37:05.271794   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:05.271825   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:05.334513   23610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56147 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa Username:docker}
	I0128 11:37:05.334740   23610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56147 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/default-k8s-diff-port-218000/id_rsa Username:docker}
	I0128 11:37:05.427800   23610 ssh_runner.go:195] Run: systemctl --version
	I0128 11:37:05.486250   23610 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:37:05.491939   23610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:37:05.512843   23610 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
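
The find/sed pass above gives every loopback CNI config an explicit "name" field and pins "cniVersion" to 1.0.0, which current CNI plugins expect. An illustrative end state (the file name is an assumption):

    # a patched loopback config after the rewrite above; actual file name varies
    cat <<'EOF' | sudo tee /etc/cni/net.d/200-loopback.conf
    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }
    EOF
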
	I0128 11:37:05.512963   23610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:37:05.520996   23610 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:37:05.534376   23610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:37:05.549314   23610 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0128 11:37:05.549330   23610 start.go:483] detecting cgroup driver to use...
	I0128 11:37:05.549359   23610 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:37:05.549501   23610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:37:05.562861   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:37:05.571844   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:37:05.580606   23610 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:37:05.580685   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:37:05.589758   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:37:05.598419   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:37:05.606914   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:37:05.615408   23610 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:37:05.623566   23610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:37:05.632069   23610 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:37:05.639776   23610 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:37:05.646831   23610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:37:05.713157   23610 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:37:05.784648   23610 start.go:483] detecting cgroup driver to use...
	I0128 11:37:05.784672   23610 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:37:05.784742   23610 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:37:05.797252   23610 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:37:05.797324   23610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:37:05.808560   23610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:37:05.823259   23610 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:37:05.919868   23610 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:37:06.008131   23610 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:37:06.008148   23610 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:37:06.021992   23610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:37:06.122067   23610 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:37:06.331299   23610 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:37:06.404980   23610 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:37:06.478057   23610 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:37:06.548701   23610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:37:06.617162   23610 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:37:06.629739   23610 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:37:06.629828   23610 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:37:06.634559   23610 start.go:551] Will wait 60s for crictl version
	I0128 11:37:06.634613   23610 ssh_runner.go:195] Run: which crictl
	I0128 11:37:06.638467   23610 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:37:06.757002   23610 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:37:06.757084   23610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:37:06.788550   23610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:37:06.862844   23610 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:37:06.863044   23610 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-218000 dig +short host.docker.internal
	I0128 11:37:06.983236   23610 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:37:06.983352   23610 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:37:06.987818   23610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
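
The grep/echo/cp one-liner above upserts a single /etc/hosts entry. It deliberately copies over the file rather than using sed -i or mv: inside a container /etc/hosts is bind-mounted, so it has to be overwritten in place. As a reusable sketch (the helper name is invented):

    # hypothetical helper mirroring the log's grep/echo/cp pattern
    upsert_host() {  # usage: upsert_host 192.168.65.2 host.minikube.internal
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
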
	I0128 11:37:06.997838   23610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-218000
	I0128 11:37:07.059584   23610 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:37:07.059677   23610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:37:07.084435   23610 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:37:07.084453   23610 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:37:07.084546   23610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:37:07.109077   23610 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
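
Both listings match, so extraction is skipped. A quick way to confirm that a preload covers a requested Kubernetes version is to grep the same listing:

    # every v1.26.1 control-plane image should appear in the preload
    docker images --format '{{.Repository}}:{{.Tag}}' \
      | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.26.1'
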
	I0128 11:37:07.109115   23610 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:37:07.109214   23610 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:37:07.181351   23610 cni.go:84] Creating CNI manager for ""
	I0128 11:37:07.181369   23610 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:37:07.181386   23610 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:37:07.181409   23610 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-218000 NodeName:default-k8s-diff-port-218000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:37:07.181538   23610 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-218000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:37:07.181624   23610 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-218000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-218000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0128 11:37:07.181690   23610 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:37:07.190114   23610 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:37:07.190187   23610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:37:07.197690   23610 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0128 11:37:07.210787   23610 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:37:07.224587   23610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
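
The kubeadm.yaml just staged is the rendered form of the config dump above and can be sanity-checked on the node before the actual init that follows. A hedged sketch using the staged path (the phase invocation mirrors the ignore list of the later init command):

    # dry-run the preflight checks without bootstrapping a control plane
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new \
      --ignore-preflight-errors=SystemVerification
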
	I0128 11:37:07.238133   23610 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:37:07.242025   23610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:37:07.252368   23610 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000 for IP: 192.168.67.2
	I0128 11:37:07.252386   23610 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.252577   23610 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:37:07.252647   23610 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:37:07.252686   23610 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.key
	I0128 11:37:07.252702   23610 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.crt with IP's: []
	I0128 11:37:07.565933   23610 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.crt ...
	I0128 11:37:07.565951   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.crt: {Name:mk963e99d3035a3fdd356a48e1dbdb1a7ca72603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.566299   23610 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.key ...
	I0128 11:37:07.566307   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/client.key: {Name:mk69e368e273946a486a405df54d9cef30587351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.566510   23610 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key.c7fa3a9e
	I0128 11:37:07.566531   23610 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 11:37:07.665419   23610 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt.c7fa3a9e ...
	I0128 11:37:07.665435   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt.c7fa3a9e: {Name:mk681665e5d53733342c44108a5e4a459700b4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.665726   23610 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key.c7fa3a9e ...
	I0128 11:37:07.665739   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key.c7fa3a9e: {Name:mk334bf7781b04157657487f4dbadc89e1fac5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.665948   23610 certs.go:333] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt
	I0128 11:37:07.666135   23610 certs.go:337] copying /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key
	I0128 11:37:07.666314   23610 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.key
	I0128 11:37:07.666330   23610 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.crt with IP's: []
	I0128 11:37:07.734218   23610 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.crt ...
	I0128 11:37:07.734228   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.crt: {Name:mk18393acd714eb37c9ccbfd364763364567af65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.734453   23610 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.key ...
	I0128 11:37:07.734464   23610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.key: {Name:mk5e41d4956bf521783faab32226a36a77193e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:37:07.734884   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:37:07.734936   23610 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:37:07.734947   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:37:07.734984   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:37:07.735024   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:37:07.735056   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:37:07.735131   23610 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:37:07.735647   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:37:07.754343   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:37:07.772182   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:37:07.789855   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/default-k8s-diff-port-218000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 11:37:07.807636   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:37:07.825369   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:37:07.843046   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:37:07.860398   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:37:07.877791   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:37:07.895622   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:37:07.913607   23610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:37:07.931466   23610 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:37:07.945013   23610 ssh_runner.go:195] Run: openssl version
	I0128 11:37:07.951028   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:37:07.959609   23610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:37:07.963784   23610 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:37:07.963830   23610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:37:07.969316   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:37:07.977589   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:37:07.986027   23610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:37:07.990329   23610 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:37:07.990381   23610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:37:07.996037   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 11:37:08.004400   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:37:08.013154   23610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:37:08.017765   23610 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:37:08.017811   23610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:37:08.023548   23610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
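
The b5213941.0, 51391683.0, and 3ec20f2e.0 link names used above are OpenSSL subject hashes; the symlink farm in /etc/ssl/certs is keyed on them so verification can locate a CA by hash. The derivation as a short sketch:

    # derive the /etc/ssl/certs/<hash>.0 link name for a CA certificate
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
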
	I0128 11:37:08.031924   23610 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-218000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:37:08.032025   23610 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:37:08.055258   23610 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:37:08.063485   23610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:37:08.071284   23610 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:37:08.071372   23610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:37:08.078931   23610 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:37:08.078958   23610 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:37:08.133143   23610 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0128 11:37:08.133189   23610 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:37:08.244529   23610 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:37:08.244665   23610 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:37:08.244802   23610 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:37:08.380644   23610 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:37:08.402199   23610 out.go:204]   - Generating certificates and keys ...
	I0128 11:37:08.402278   23610 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:37:08.402361   23610 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:37:08.534419   23610 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 11:37:08.589342   23610 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 11:37:08.743918   23610 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 11:37:09.064967   23610 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 11:37:09.267559   23610 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 11:37:09.267690   23610 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-218000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0128 11:37:09.325649   23610 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 11:37:09.325794   23610 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-218000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0128 11:37:09.398465   23610 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 11:37:09.466621   23610 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 11:37:09.533908   23610 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 11:37:09.533959   23610 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:37:09.738377   23610 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:37:09.791665   23610 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:37:09.914538   23610 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:37:10.010526   23610 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:37:10.022078   23610 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:37:10.023001   23610 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:37:10.023047   23610 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 11:37:10.107124   23610 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:37:10.128846   23610 out.go:204]   - Booting up control plane ...
	I0128 11:37:10.128984   23610 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:37:10.129111   23610 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:37:10.129224   23610 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:37:10.129402   23610 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:37:10.129575   23610 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:37:27 UTC. --
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.043684247Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044647330Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044903083Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044949007Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.088592507Z" level=info msg="Starting up"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090253834Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090291624Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090307486Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090315468Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091569449Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091611067Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091623295Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091629230Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.098516161Z" level=info msg="Loading containers: start."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.175682495Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.208852655Z" level=info msg="Loading containers: done."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.216989221Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.217057643Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.241129214Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.244202317Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T19:37:29Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan28 18:55] hrtimer: interrupt took 1291156 ns
	
	* 
	* ==> kernel <==
	*  19:37:29 up  1:36,  0 users,  load average: 2.80, 1.18, 1.08
	Linux old-k8s-version-867000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:37:29 UTC. --
	Jan 28 19:37:27 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:37:28 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jan 28 19:37:28 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:37:28 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: I0128 19:37:28.502470   25150 server.go:410] Version: v1.16.0
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: I0128 19:37:28.502798   25150 plugins.go:100] No cloud provider specified.
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: I0128 19:37:28.502833   25150 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: I0128 19:37:28.506206   25150 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: W0128 19:37:28.506913   25150 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: W0128 19:37:28.507021   25150 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:37:28 old-k8s-version-867000 kubelet[25150]: F0128 19:37:28.507044   25150 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:37:28 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:37:28 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:37:29 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jan 28 19:37:29 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:37:29 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: I0128 19:37:29.252715   25164 server.go:410] Version: v1.16.0
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: I0128 19:37:29.252993   25164 plugins.go:100] No cloud provider specified.
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: I0128 19:37:29.253034   25164 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: I0128 19:37:29.254786   25164 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: W0128 19:37:29.255415   25164 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: W0128 19:37:29.255482   25164 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:37:29 old-k8s-version-867000 kubelet[25164]: F0128 19:37:29.255507   25164 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:37:29 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:37:29 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:37:29.375012   23759 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
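The kubelet log above is a tight crash loop: systemd's restart counter ticks from 928 to 929 in under two seconds, and every attempt dies on the same fatal, `failed to run Kubelet: mountpoint for cpu not found`. Kubelet v1.16 predates cgroup v2 support and looks for a cgroup v1 mount carrying the `cpu` controller; on a host that exposes only the unified cgroup v2 hierarchy (plausible for this 5.15 linuxkit Docker Desktop VM, though the log alone does not prove it) that mount simply does not exist. A minimal diagnostic sketch of that lookup, hypothetical and not part of the test suite:

// cgroupprobe.go -- hypothetical sketch of the check behind the
// "mountpoint for cpu not found" fatal: scan /proc/mounts for a cgroup v1
// hierarchy carrying the cpu controller, which kubelet v1.16 requires.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	v1cpu, v2 := false, false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 {
			continue
		}
		switch fields[2] {
		case "cgroup": // v1 hierarchy; options name the controllers it carries
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					v1cpu = true
				}
			}
		case "cgroup2": // unified v2 hierarchy, invisible to kubelet v1.16
			v2 = true
		}
	}
	fmt.Printf("cgroup v1 cpu mount: %v, cgroup v2 unified mount: %v\n", v1cpu, v2)
}

On an affected host this would print `cgroup v1 cpu mount: false, cgroup v2 unified mount: true`, exactly the combination kubelet v1.16 cannot use.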
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (438.75903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.03s)
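The "Stopped" status above matches the earlier container-status probe, which aborted with `connect: connect endpoint 'unix:///var/run/dockershim.sock' ... context deadline exceeded`: Docker itself came up, but nothing was serving the CRI socket, so every status query blocked until its deadline. A hedged sketch of such a bounded probe (hypothetical; the real client is a gRPC CRI client, not a raw dial):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Bound the probe the way the CRI client bounds its connect: with a
	// context deadline instead of an unbounded block.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", "/var/run/dockershim.sock")
	if err != nil {
		// Fails fast if the socket file is absent; reports the context
		// deadline if the file exists but nothing is accepting on it.
		fmt.Println("CRI endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("CRI endpoint reachable")
}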

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
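The nine-minute wait above is a label-selector poll against the apiserver; with the apiserver down, every attempt fails, first as connection EOFs and later as `client rate limiter Wait returned an error: context deadline exceeded` (the request's context expiring before client-go's limiter even admits it). A minimal sketch of such a loop, assuming client-go; the names and intervals are illustrative, not the suite's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 3s for up to 9m, matching the test's wait budget.
	err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver down, EOF and rate-limiter context errors
			// surface here; log and keep polling until the budget is spent.
			fmt.Println("WARNING:", err)
			return false, nil
		}
		return len(pods.Items) > 0, nil
	})
	fmt.Println("wait result:", err)
}

Returning `false, nil` from the condition keeps the poll alive across transient errors, which is why the log below accumulates one WARNING per attempt instead of failing at the first EOF.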

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
(previous warning repeated 3 more times)
E0128 11:38:47.329051    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:39:04.218268    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:39:16.634619    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:39:23.095806    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:39:30.286291    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:39:40.462029    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
(previous warning repeated 2 more times)
E0128 11:40:10.417903    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
(previous warning repeated 4 more times)
E0128 11:41:02.047395    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:41:08.749891    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:41:09.018699    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:41:24.331484    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
(previous warning repeated 7 more times)
E0128 11:42:49.706258    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:42:54.481590    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:43:06.821598    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55319/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:43:47.328835    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 9 more times)
E0128 11:44:04.217079    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0128 11:44:05.090674    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 11 more times)
E0128 11:44:16.635846    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 5 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 6 more times)
E0128 11:44:30.286825    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 3 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 4 more times)
E0128 11:44:40.532365    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 19 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 2 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 49 more times)
E0128 11:45:57.598167    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 3 more times)
E0128 11:46:02.118842    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 6 more times)
E0128 11:46:08.823384    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:46:09.090785    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(previous warning repeated 15 more times)
E0128 11:46:24.401914    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (407.941856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-867000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-867000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-867000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.724µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-867000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
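For context, a minimal sketch of the kind of label-selector poll that helpers_test.go:329 appears to run above. The function name waitForPodsByLabel is hypothetical; only the client-go calls are real. It lists pods matching k8s-app=kubernetes-dashboard until one appears or the 9m0s context deadline expires, which is exactly when the rate-limiter warnings and the final "timed out waiting for the condition" show up (the real helper also checks pod readiness, elided here):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// The repeated WARNING lines above come from errors like this one:
			// the client rate limiter refuses to wait past the context deadline.
			fmt.Printf("WARNING: pod list for %q returned: %v\n", selector, err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "timed out waiting for the condition"
		case <-time.After(3 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPodsByLabel(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for the condition:", err)
	}
}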
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-867000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-867000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4",
	        "Created": "2023-01-28T19:14:00.935880886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:19:38.982033396Z",
	            "FinishedAt": "2023-01-28T19:19:35.984970564Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4/6fd303c5a4731b02b4c33f7ed54a03d5f05077fd2b6a0c5bd12077806d5484b4-json.log",
	        "Name": "/old-k8s-version-867000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-867000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-867000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb-init/diff:/var/lib/docker/overlay2/79142c1bfb7158d99171fa66335a26cb13f2a8a8cbfa638c237bffed361e3a89/diff:/var/lib/docker/overlay2/e0e1fdc3516530c880b802c7996a1b8ce82ca95934105a41f254ec5137fd39e2/diff:/var/lib/docker/overlay2/fac4ba40ee55baf2806d43764f680a3decaf4fd8b368bbaea39adb65c5622ca5/diff:/var/lib/docker/overlay2/e441c934bc48afc9e7a3386f7c74fe0c20393e198bcc7743e8f9afdf05efe326/diff:/var/lib/docker/overlay2/d39bd1a2e948812990ca711385ce5fa40c2dc4804530a386d99c80b7f7bf6fe2/diff:/var/lib/docker/overlay2/95e11949c4f876ab44bd0adcbe6781a6adf47c3ff9e63ec97fd322466164eb6d/diff:/var/lib/docker/overlay2/63d3d3a1f2065070879db8c5bfb59a21af9a85c0fc71bc3330bd7cf269f4662b/diff:/var/lib/docker/overlay2/4d7e309fbeb00560ca7804415050f0c209f3d375cbbf236c28c11c70436d4ae7/diff:/var/lib/docker/overlay2/ba0d0617dbaa03778329a421f7fa9c42f1bb9e1e193a334dcd28b9dd83d388ed/diff:/var/lib/docker/overlay2/64bc6c
4c97d7afd10818bb2aee713912c62e6c6bad568397a334214568819094/diff:/var/lib/docker/overlay2/9eea8322dbca25f19e6b165fe69b4576c511d61932fa9488f54100b4adeda168/diff:/var/lib/docker/overlay2/ec78b4d745f797c0757e92219d722526d17cc9334aa98eb28fd654323898f059/diff:/var/lib/docker/overlay2/b63329cd62781660f2238fbcf704c8eebb0ea9c063e7692adfb1d54a5956b76a/diff:/var/lib/docker/overlay2/be4ad500dc73dc0f7a89386a220fda9a34cf83a2943e0df5f43e79bfeeec6dfb/diff:/var/lib/docker/overlay2/cc506fb9628569db47233dde2107f623c36f8706857dc9175ecc18da27f21ca9/diff:/var/lib/docker/overlay2/d3fbb137518a7e6371da37751ff1fb77c913000ef6751293d79279f527c805d0/diff:/var/lib/docker/overlay2/de9b2061ccfcc155f185f7ab9847b5efdcdc77c3dd2e26c7e010b4786b19466e/diff:/var/lib/docker/overlay2/47068d751b648d6786ed5645603f9500f2d3549961d067c28722a53af0072a33/diff:/var/lib/docker/overlay2/6404c0f71023a39e6175130d1bfc9a1f4d2eae9a418fb7e0d42c0a65317606c7/diff:/var/lib/docker/overlay2/bd3f3a98034631dd17e4c4d411d8babd82c3bf642410f52f8af6f71acbc09106/diff:/var/lib/d
ocker/overlay2/4e0a7618854eea772703e589408f79580161b9177c879421f2f691c46d58a60a/diff:/var/lib/docker/overlay2/782fb02ecc3c1bc71373ff3d8b713b2bc4d26a60de3da9576878ade33b4992ee/diff:/var/lib/docker/overlay2/7533e761a436c07c8d9cd30e8b859b1f85de596787d3e4f00ba2fc87c8e08809/diff:/var/lib/docker/overlay2/8fa41de6ca6cee76164e50650a0b671b453322b8cada6868d2090bdc55dca493/diff:/var/lib/docker/overlay2/dcac84778124f3f93c0704b8ce7a776f24b386bba206afb9fa8657f6361de17b/diff:/var/lib/docker/overlay2/38476836b7aea22bb21f8df4c5d24ca581ec51456577cbc587735fd7632f83ec/diff:/var/lib/docker/overlay2/b180f265391afb4bbd508de68ada783469c21c620f1796782ffb3b573f7e70a2/diff:/var/lib/docker/overlay2/e13f4fcd119c410ddd745414d8b1d0ae30714a3cdbe36d7b01819005d41464aa/diff:/var/lib/docker/overlay2/690e7538a41741ca2ccf5aeec1133ccbc188dc6cc1dce00935059a30f6cb0c9b/diff:/var/lib/docker/overlay2/1765a1cbadca6aa0cdaaba760dedeba82919d483a8ad99943e888f737518b687/diff:/var/lib/docker/overlay2/2d7069c458db8901c6e152ca71b0aaa1ddb0a3457c7c8fb7bb040671d2b
a42ae/diff:/var/lib/docker/overlay2/7e4848df7b6b74fc7d6c4c0fc99075bdb69362e7527b6f677e7d2124d02cecd1/diff:/var/lib/docker/overlay2/c6645f05d6483a2e5e109899c766fee254537cb272ed8b25f40da02dec68bd0a/diff:/var/lib/docker/overlay2/eec788e4d45314574efe5c16f7502c0f5a09febe1c8ee35a5180259889f8257f/diff:/var/lib/docker/overlay2/45cd4b08a855f084c1c06a65f871df9287fe4fa5492eb93ea8c5806f8902af34/diff:/var/lib/docker/overlay2/bc8f511ffbc35a69047b9052add80532a88f0a305785aa0ffecee72babecdb6c/diff:/var/lib/docker/overlay2/72b0909462bee1f7a5f130f21715b150d3ed694f6d1f8f94bebc3b882ffd37b4/diff:/var/lib/docker/overlay2/8989993d4ea98ef674ee8268e3df0a227279d8ecd9c6cc96bde872992753da1f/diff:/var/lib/docker/overlay2/f914250e3f8befc8b24c98ac5561328b3df75d319ed91a9d1efe4287edf819ed/diff:/var/lib/docker/overlay2/00034316e473aca001ab0dceff5d356002633ffac50bc9df58da1c6c6bd9dc1b/diff:/var/lib/docker/overlay2/c321f77609367af7b9b056846695b79a6ca7011dae1346ccb7b268424d848661/diff:/var/lib/docker/overlay2/791cadd07a627ebff13560c239469308a2ad30
659ca32e469a18745c54fcc7fe/diff:/var/lib/docker/overlay2/67a4def3de9e3f2fe0bf3da0abe7b7679ee2a173be572e7ebdc5bab7db1c321b/diff:/var/lib/docker/overlay2/9f1255e61d7efdef3846a0ec873eb647e15ce7d8183aacccf1e9790726dbebcd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6248d9ed8b6aecbed69ddb29d60b03a9ba849c6a97ad9d5e99f1e4969e7a34cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-867000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-867000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-867000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-867000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56bc2e5c762ee218e9cc648a942743397f45d38fe7e80bb7ebfa5abcf2ee1586",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55320"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55321"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55322"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55323"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55319"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/56bc2e5c762e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-867000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd303c5a473",
	                        "old-k8s-version-867000"
	                    ],
	                    "NetworkID": "05da8fabe29d00d6e3eb58e11e2bbe3932ea7f3d437268a555d06945d4a9c8c9",
	                    "EndpointID": "fc33025b57ea548e3024d3d6addb6d5cbf64cfd4291900853273d019fcc07246",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
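As an aside, the same post-mortem inspection can be done with the Docker Go SDK instead of shelling out to the docker CLI; this is an illustrative sketch, not what helpers_test.go actually does (it runs `docker inspect`, as shown above). The container name is the one from the log:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-867000")
	if err != nil {
		panic(err)
	}
	// The fields the post-mortem above cares about: running state and restart count.
	fmt.Printf("status=%s running=%v restarts=%d\n",
		info.State.Status, info.State.Running, info.RestartCount)
}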
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (408.823384ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
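The --format flag on both status calls is an ordinary Go template applied to a status struct, which is why {{.Host}} prints Running here while {{.APIServer}} printed Stopped earlier for the same profile. A simplified sketch (this Status struct is a stand-in, not minikube's real type):

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"} // mirrors the two stdout blocks above
	for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
		t := template.Must(template.New("status").Parse(f))
		if err := t.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n")
	}
}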
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-867000 logs -n 25: (3.460742906s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-724000                                | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-724000                                | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-724000                                | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	| delete  | -p embed-certs-724000                                | embed-certs-724000           | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	| delete  | -p                                                   | disable-driver-mounts-170000 | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:36 PST |
	|         | disable-driver-mounts-170000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:36 PST | 28 Jan 23 11:37 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:37 PST | 28 Jan 23 11:37 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:37 PST | 28 Jan 23 11:38 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-218000     | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:38 PST | 28 Jan 23 11:38 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:38 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-218000 | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:43 PST |
	|         | default-k8s-diff-port-218000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-047000 --memory=2200 --alsologtostderr | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:43 PST | 28 Jan 23 11:44 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-047000           | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:44 PST | 28 Jan 23 11:44 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-047000                                 | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:44 PST | 28 Jan 23 11:44 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-047000                | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:44 PST | 28 Jan 23 11:44 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-047000 --memory=2200 --alsologtostderr | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:44 PST | 28 Jan 23 11:45 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-047000 sudo                            | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:45 PST | 28 Jan 23 11:45 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-047000                                 | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:45 PST | 28 Jan 23 11:45 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-047000                                 | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:45 PST | 28 Jan 23 11:45 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-047000                                 | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:45 PST | 28 Jan 23 11:45 PST |
	| delete  | -p newest-cni-047000                                 | newest-cni-047000            | jenkins | v1.29.0 | 28 Jan 23 11:45 PST | 28 Jan 23 11:45 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:44:35
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:44:35.411344   24581 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:44:35.411541   24581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:44:35.411547   24581 out.go:309] Setting ErrFile to fd 2...
	I0128 11:44:35.411565   24581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:44:35.411685   24581 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 11:44:35.412207   24581 out.go:303] Setting JSON to false
	I0128 11:44:35.431379   24581 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6250,"bootTime":1674928825,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 11:44:35.431464   24581 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:44:35.454034   24581 out.go:177] * [newest-cni-047000] minikube v1.29.0 on Darwin 13.2
	I0128 11:44:35.497281   24581 notify.go:220] Checking for updates...
	I0128 11:44:35.518381   24581 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:44:35.539401   24581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:44:35.560236   24581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:44:35.581507   24581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:44:35.602781   24581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 11:44:35.624625   24581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:44:35.646327   24581 config.go:180] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:44:35.646995   24581 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:44:35.709022   24581 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:44:35.709155   24581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:44:35.854926   24581 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:44:35.689358848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
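// Aside (illustrative sketch, not minikube source): the docker info blob above
// comes from `docker system info --format "{{json .}}"` and can be decoded
// into just the fields a caller needs. The struct below keeps only fields
// whose JSON keys are visible in the log line.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes memory\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}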
	I0128 11:44:35.898527   24581 out.go:177] * Using the docker driver based on existing profile
	I0128 11:44:35.919539   24581 start.go:296] selected driver: docker
	I0128 11:44:35.919596   24581 start.go:857] validating driver "docker" against &{Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:44:35.919751   24581 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:44:35.924241   24581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:44:36.068499   24581 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 19:44:35.905013786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:44:36.068660   24581 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0128 11:44:36.068678   24581 cni.go:84] Creating CNI manager for ""
	I0128 11:44:36.068690   24581 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
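// Aside (paraphrase of the decision logged by cni.go:157 above, not the actual
// source): with the docker driver and the docker container runtime on
// Kubernetes v1.24+, minikube recommends the bridge CNI, since dockershim's
// built-in networking is gone from those releases.
package main

import "fmt"

func recommendCNI(driver, runtime string, major, minor int) string {
	if driver == "docker" && runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
		return "bridge"
	}
	return "" // keep whatever the config or runtime already provides
}

func main() {
	fmt.Println(recommendCNI("docker", "docker", 1, 26)) // prints "bridge"
}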
	I0128 11:44:36.068702   24581 start_flags.go:319] config:
	{Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:44:36.112118   24581 out.go:177] * Starting control plane node newest-cni-047000 in cluster newest-cni-047000
	I0128 11:44:36.133554   24581 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:44:36.155481   24581 out.go:177] * Pulling base image ...
	I0128 11:44:36.197410   24581 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:44:36.197411   24581 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:44:36.197512   24581 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:44:36.197530   24581 cache.go:57] Caching tarball of preloaded images
	I0128 11:44:36.197746   24581 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:44:36.197773   24581 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
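// Aside (assumed logic, not the real preload.go): the "Found local preload ...
// skipping download" lines above boil down to a stat on the cached tarball
// path before deciding whether to download it. preloadPath is a hypothetical
// helper; the file name pattern is taken from the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(home, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name)
}

func main() {
	// MINIKUBE_HOME is assumed to be set, as it is in the environment above.
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.26.1", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}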
	I0128 11:44:36.198864   24581 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/config.json ...
	I0128 11:44:36.257745   24581 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:44:36.257760   24581 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:44:36.257882   24581 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:44:36.258042   24581 start.go:364] acquiring machines lock for newest-cni-047000: {Name:mk1278a498e13fbcfa290363ee473050c4e6abfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:44:36.258172   24581 start.go:368] acquired machines lock for "newest-cni-047000" in 99.629µs
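// Aside (generic sketch, not minikube's lock package): the machines lock above
// is acquired with a retry loop shaped by the logged spec (Delay:500ms
// Timeout:10m0s). An O_EXCL lock file gives equivalent acquire-or-wait
// semantics; the path below is illustrative.
package main

import (
	"fmt"
	"os"
	"time"
)

// tryLock creates the lock file exclusively; O_EXCL fails if it already exists.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		if os.IsExist(err) {
			return false, nil // someone else holds the lock
		}
		return false, err
	}
	return true, f.Close()
}

func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryLock(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("acquired machines lock")
}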
	I0128 11:44:36.258195   24581 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:44:36.258205   24581 fix.go:55] fixHost starting: 
	I0128 11:44:36.258500   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:36.319743   24581 fix.go:103] recreateIfNeeded on newest-cni-047000: state=Stopped err=<nil>
	W0128 11:44:36.319782   24581 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:44:36.362180   24581 out.go:177] * Restarting existing docker container for "newest-cni-047000" ...
	I0128 11:44:36.383538   24581 cli_runner.go:164] Run: docker start newest-cni-047000
	I0128 11:44:36.742267   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:36.804871   24581 kic.go:426] container "newest-cni-047000" state is running.
	I0128 11:44:36.805467   24581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0128 11:44:36.868138   24581 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/config.json ...
	I0128 11:44:36.868557   24581 machine.go:88] provisioning docker machine ...
	I0128 11:44:36.868588   24581 ubuntu.go:169] provisioning hostname "newest-cni-047000"
	I0128 11:44:36.868674   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:36.939406   24581 main.go:141] libmachine: Using SSH client type: native
	I0128 11:44:36.939628   24581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56822 <nil> <nil>}
	I0128 11:44:36.939641   24581 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-047000 && echo "newest-cni-047000" | sudo tee /etc/hostname
	I0128 11:44:37.093416   24581 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-047000
	
	I0128 11:44:37.093579   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:37.155694   24581 main.go:141] libmachine: Using SSH client type: native
	I0128 11:44:37.155877   24581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56822 <nil> <nil>}
	I0128 11:44:37.155891   24581 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-047000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-047000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-047000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:44:37.291200   24581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
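
The provisioning steps here run each shell snippet over SSH against the container's forwarded port (127.0.0.1:56822, as logged above). A minimal sketch of running one such command with golang.org/x/crypto/ssh, assuming key auth with the machine's id_rsa; host-key verification is skipped purely for illustration.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded port and runs one command, roughly
    // what the "About to run SSH command" log lines correspond to.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Illustration only: a real client should verify the host key.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:56822", "docker",
    		os.ExpandEnv("$HOME/.minikube/machines/newest-cni-047000/id_rsa"),
    		"hostname")
    	fmt.Println(out, err)
    }
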
	I0128 11:44:37.291219   24581 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2556/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2556/.minikube}
	I0128 11:44:37.291236   24581 ubuntu.go:177] setting up certificates
	I0128 11:44:37.291243   24581 provision.go:83] configureAuth start
	I0128 11:44:37.291331   24581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0128 11:44:37.350897   24581 provision.go:138] copyHostCerts
	I0128 11:44:37.350987   24581 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem, removing ...
	I0128 11:44:37.350996   24581 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem
	I0128 11:44:37.351097   24581 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.pem (1082 bytes)
	I0128 11:44:37.351320   24581 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem, removing ...
	I0128 11:44:37.351328   24581 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem
	I0128 11:44:37.351389   24581 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/cert.pem (1123 bytes)
	I0128 11:44:37.351564   24581 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem, removing ...
	I0128 11:44:37.351569   24581 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem
	I0128 11:44:37.351626   24581 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2556/.minikube/key.pem (1679 bytes)
	I0128 11:44:37.351754   24581 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem org=jenkins.newest-cni-047000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-047000]
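
provision.go:112 generates a server certificate whose SAN list mixes IP addresses and DNS names (192.168.67.2, 127.0.0.1, localhost, minikube, newest-cni-047000). A compact sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas the log shows the real cert being signed by ca.pem/ca-key.pem.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs mirroring the log: IPs plus hostnames.
    	ips := []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")}
    	dns := []string{"localhost", "minikube", "newest-cni-047000"}

    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-047000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	// Self-signed here for brevity; minikube signs with the CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }
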
	I0128 11:44:37.431766   24581 provision.go:172] copyRemoteCerts
	I0128 11:44:37.431828   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:44:37.431888   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:37.492467   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:37.587280   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:44:37.604480   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0128 11:44:37.621934   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:44:37.639669   24581 provision.go:86] duration metric: configureAuth took 348.401605ms
	I0128 11:44:37.639682   24581 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:44:37.639897   24581 config.go:180] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:44:37.639963   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:37.698234   24581 main.go:141] libmachine: Using SSH client type: native
	I0128 11:44:37.698392   24581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56822 <nil> <nil>}
	I0128 11:44:37.698401   24581 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:44:37.829358   24581 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:44:37.829374   24581 ubuntu.go:71] root file system type: overlay
	I0128 11:44:37.829549   24581 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:44:37.829654   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:37.890372   24581 main.go:141] libmachine: Using SSH client type: native
	I0128 11:44:37.890538   24581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56822 <nil> <nil>}
	I0128 11:44:37.890587   24581 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:44:38.032845   24581 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:44:38.032940   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:38.093690   24581 main.go:141] libmachine: Using SSH client type: native
	I0128 11:44:38.093858   24581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56822 <nil> <nil>}
	I0128 11:44:38.093871   24581 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:44:38.231636   24581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:44:38.231654   24581 machine.go:91] provisioned docker machine in 1.363037669s
	I0128 11:44:38.231663   24581 start.go:300] post-start starting for "newest-cni-047000" (driver="docker")
	I0128 11:44:38.231678   24581 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:44:38.231755   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:44:38.231827   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:38.291693   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:38.387540   24581 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:44:38.391141   24581 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:44:38.391161   24581 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:44:38.391176   24581 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:44:38.391181   24581 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:44:38.391188   24581 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/addons for local assets ...
	I0128 11:44:38.391284   24581 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2556/.minikube/files for local assets ...
	I0128 11:44:38.391438   24581 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem -> 38492.pem in /etc/ssl/certs
	I0128 11:44:38.391601   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:44:38.398908   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:44:38.416553   24581 start.go:303] post-start completed in 184.86088ms
	I0128 11:44:38.416625   24581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:44:38.416695   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:38.475636   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:38.565077   24581 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:44:38.570103   24581 fix.go:57] fixHost completed within 2.311802019s
	I0128 11:44:38.570124   24581 start.go:83] releasing machines lock for "newest-cni-047000", held for 2.311853656s
	I0128 11:44:38.570234   24581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0128 11:44:38.630429   24581 ssh_runner.go:195] Run: cat /version.json
	I0128 11:44:38.630446   24581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:44:38.630495   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:38.630514   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:38.694502   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:38.695087   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:38.786752   24581 ssh_runner.go:195] Run: systemctl --version
	I0128 11:44:38.846985   24581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:44:38.852593   24581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:44:38.868395   24581 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:44:38.868502   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:44:38.876782   24581 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:44:38.889763   24581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:44:38.897702   24581 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 11:44:38.897718   24581 start.go:483] detecting cgroup driver to use...
	I0128 11:44:38.897729   24581 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:44:38.897827   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:44:38.911799   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:44:38.920609   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:44:38.929152   24581 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:44:38.929251   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:44:38.938256   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:44:38.946650   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:44:38.955011   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:44:38.963490   24581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:44:38.971822   24581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:44:38.980436   24581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:44:38.987763   24581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:44:38.994915   24581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:44:39.069560   24581 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:44:39.141966   24581 start.go:483] detecting cgroup driver to use...
	I0128 11:44:39.141985   24581 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:44:39.142100   24581 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:44:39.153642   24581 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:44:39.153719   24581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:44:39.165279   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:44:39.179660   24581 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:44:39.292343   24581 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:44:39.388137   24581 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:44:39.388156   24581 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:44:39.403218   24581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:44:39.486917   24581 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:44:39.743333   24581 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:44:39.809839   24581 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:44:39.883293   24581 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:44:39.953207   24581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:44:40.017441   24581 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:44:40.038302   24581 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:44:40.038393   24581 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:44:40.042856   24581 start.go:551] Will wait 60s for crictl version
	I0128 11:44:40.042919   24581 ssh_runner.go:195] Run: which crictl
	I0128 11:44:40.047114   24581 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:44:40.164251   24581 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:44:40.164339   24581 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:44:40.193924   24581 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:44:40.246108   24581 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:44:40.246296   24581 cli_runner.go:164] Run: docker exec -t newest-cni-047000 dig +short host.docker.internal
	I0128 11:44:40.362462   24581 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:44:40.362576   24581 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:44:40.367005   24581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:44:40.377413   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:40.466976   24581 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0128 11:44:40.489326   24581 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:44:40.489488   24581 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:44:40.516024   24581 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:44:40.516040   24581 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:44:40.516132   24581 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:44:40.541006   24581 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:44:40.541024   24581 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:44:40.541207   24581 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:44:40.613326   24581 cni.go:84] Creating CNI manager for ""
	I0128 11:44:40.613343   24581 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:44:40.613358   24581 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0128 11:44:40.613373   24581 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-047000 NodeName:newest-cni-047000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:44:40.613515   24581 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-047000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
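
kubeadm.go:177 renders the config above from the options struct logged at kubeadm.go:172. A minimal sketch of that render step with text/template; the template here is a deliberately tiny, hypothetical slice of the full config, not minikube's actual template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A hypothetical fragment of the rendered config; the full file carries
    // many more fields, as the log shows.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the kubeadm options logged above.
    	if err := t.Execute(os.Stdout, opts{
    		AdvertiseAddress: "192.168.67.2",
    		APIServerPort:    8443,
    		CRISocket:        "/var/run/cri-dockerd.sock",
    		NodeName:         "newest-cni-047000",
    	}); err != nil {
    		panic(err)
    	}
    }
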
	
	I0128 11:44:40.613611   24581 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-047000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:44:40.613682   24581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:44:40.621874   24581 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:44:40.621993   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:44:40.629833   24581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0128 11:44:40.643486   24581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:44:40.656627   24581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0128 11:44:40.670612   24581 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:44:40.674864   24581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:44:40.684975   24581 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000 for IP: 192.168.67.2
	I0128 11:44:40.684993   24581 certs.go:186] acquiring lock for shared ca certs: {Name:mkee0a6d4b79657122da9b64494daa75cd779ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:44:40.685182   24581 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key
	I0128 11:44:40.685257   24581 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key
	I0128 11:44:40.685416   24581 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/client.key
	I0128 11:44:40.685553   24581 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/apiserver.key.c7fa3a9e
	I0128 11:44:40.685641   24581 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/proxy-client.key
	I0128 11:44:40.685875   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem (1338 bytes)
	W0128 11:44:40.685913   24581 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849_empty.pem, impossibly tiny 0 bytes
	I0128 11:44:40.685924   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 11:44:40.685964   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:44:40.686001   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:44:40.686032   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/certs/key.pem (1679 bytes)
	I0128 11:44:40.686106   24581 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem (1708 bytes)
	I0128 11:44:40.686700   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:44:40.704997   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:44:40.723291   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:44:40.741527   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/newest-cni-047000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 11:44:40.761479   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:44:40.780376   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 11:44:40.799123   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:44:40.818091   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 11:44:40.836685   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/certs/3849.pem --> /usr/share/ca-certificates/3849.pem (1338 bytes)
	I0128 11:44:40.854885   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/ssl/certs/38492.pem --> /usr/share/ca-certificates/38492.pem (1708 bytes)
	I0128 11:44:40.872625   24581 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2556/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:44:40.889926   24581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 11:44:40.903328   24581 ssh_runner.go:195] Run: openssl version
	I0128 11:44:40.909079   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3849.pem && ln -fs /usr/share/ca-certificates/3849.pem /etc/ssl/certs/3849.pem"
	I0128 11:44:40.917875   24581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3849.pem
	I0128 11:44:40.922063   24581 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:26 /usr/share/ca-certificates/3849.pem
	I0128 11:44:40.922111   24581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3849.pem
	I0128 11:44:40.927825   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3849.pem /etc/ssl/certs/51391683.0"
	I0128 11:44:40.935676   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38492.pem && ln -fs /usr/share/ca-certificates/38492.pem /etc/ssl/certs/38492.pem"
	I0128 11:44:40.944254   24581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38492.pem
	I0128 11:44:40.948607   24581 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:26 /usr/share/ca-certificates/38492.pem
	I0128 11:44:40.948655   24581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38492.pem
	I0128 11:44:40.954688   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38492.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:44:40.962361   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:44:40.970990   24581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:44:40.975124   24581 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:44:40.975178   24581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:44:40.980938   24581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:44:40.989466   24581 kubeadm.go:401] StartCluster: {Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:44:40.989610   24581 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:44:41.012788   24581 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:44:41.021181   24581 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:44:41.021196   24581 kubeadm.go:633] restartCluster start
	I0128 11:44:41.021248   24581 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:44:41.028476   24581 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:41.028578   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:41.089595   24581 kubeconfig.go:135] verify returned: extract IP: "newest-cni-047000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:44:41.089796   24581 kubeconfig.go:146] "newest-cni-047000" context is missing from /Users/jenkins/minikube-integration/15565-2556/kubeconfig - will repair!
	I0128 11:44:41.090139   24581 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/kubeconfig: {Name:mk9285754a110019f97a480561fbfd0056cc86f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:44:41.091485   24581 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:44:41.099672   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:41.099761   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:41.108744   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:41.609225   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:41.609436   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:41.620120   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:42.110919   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:42.111082   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:42.122373   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:42.609012   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:42.609137   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:42.619704   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:43.109407   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:43.109642   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:43.120794   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:43.609883   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:43.610101   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:43.621157   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:44.110357   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:44.110489   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:44.121767   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:44.609197   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:44.609330   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:44.618745   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:45.110244   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:45.110379   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:45.121610   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:45.610097   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:45.610318   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:45.621469   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:46.109447   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:46.109570   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:46.120396   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:46.610420   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:46.610618   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:46.621332   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:47.109836   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:47.109922   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:47.120598   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:47.609204   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:47.609317   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:47.620270   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:48.111084   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:48.111238   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:48.122234   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:48.611096   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:48.611246   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:48.622468   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:49.109410   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:49.109526   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:49.120642   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:49.611093   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:49.611186   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:49.621464   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:50.111181   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:50.111431   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:50.122543   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:50.609129   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:50.609240   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:50.620375   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:51.111159   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:51.111395   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:51.122606   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:51.122616   24581 api_server.go:165] Checking apiserver status ...
	I0128 11:44:51.122669   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:44:51.130998   24581 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:51.131010   24581 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
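
The block of api_server.go:165 lines above is a poll: pgrep is retried roughly every 500ms until it matches or the wait gives up, at which point the "needs reconfigure" path is taken. A standard-library-only sketch of the same loop; waitForProcess is a hypothetical helper, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline
    // passes, mirroring the ~500ms cadence of the api_server.go:165 lines.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out waiting for %q", pattern)
    }

    func main() {
    	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 10*time.Second)
    	fmt.Println(err)
    }
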
	I0128 11:44:51.131019   24581 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:44:51.131089   24581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:44:51.155477   24581 docker.go:456] Stopping containers: [b5d7e2be1d41 34c0c82610ae 9ee5b7501e53 47f6b6b061f7 779b933f8892 07662220954e db56d25edd36 3e126f495485 eb38c22037df 05e3d0ad92e0 d864342bdd9d da9eaea6213b 76a1e60f21f5 f0a041c1e5c4 71e87302b548 f972e403732b]
	I0128 11:44:51.155568   24581 ssh_runner.go:195] Run: docker stop b5d7e2be1d41 34c0c82610ae 9ee5b7501e53 47f6b6b061f7 779b933f8892 07662220954e db56d25edd36 3e126f495485 eb38c22037df 05e3d0ad92e0 d864342bdd9d da9eaea6213b 76a1e60f21f5 f0a041c1e5c4 71e87302b548 f972e403732b
	I0128 11:44:51.179268   24581 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:44:51.189992   24581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:44:51.197785   24581 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 28 19:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 19:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 28 19:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:43 /etc/kubernetes/scheduler.conf
	
	I0128 11:44:51.197847   24581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:44:51.205438   24581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:44:51.212945   24581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:44:51.220803   24581 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:51.220854   24581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:44:51.228528   24581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:44:51.236362   24581 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:44:51.236420   24581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:44:51.244044   24581 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:44:51.252694   24581 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:44:51.252711   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:51.309622   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:52.048337   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:52.196651   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:52.263467   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:52.372803   24581 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:44:52.372877   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:44:52.886585   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:44:53.386581   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:44:53.459066   24581 api_server.go:71] duration metric: took 1.086235485s to wait for apiserver process to appear ...
	I0128 11:44:53.459100   24581 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:44:53.459141   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:53.460842   24581 api_server.go:268] stopped: https://127.0.0.1:56821/healthz: Get "https://127.0.0.1:56821/healthz": EOF
	I0128 11:44:53.961671   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:56.173069   24581 api_server.go:278] https://127.0.0.1:56821/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:44:56.173111   24581 api_server.go:102] status: https://127.0.0.1:56821/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:44:56.462989   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:56.469473   24581 api_server.go:278] https://127.0.0.1:56821/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:44:56.469489   24581 api_server.go:102] status: https://127.0.0.1:56821/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:44:56.963120   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:56.968814   24581 api_server.go:278] https://127.0.0.1:56821/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:44:56.968827   24581 api_server.go:102] status: https://127.0.0.1:56821/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:44:57.461382   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:57.466830   24581 api_server.go:278] https://127.0.0.1:56821/healthz returned 200:
	ok
	I0128 11:44:57.473777   24581 api_server.go:140] control plane version: v1.26.1
	I0128 11:44:57.473797   24581 api_server.go:130] duration metric: took 4.014580534s to wait for apiserver health ...
	I0128 11:44:57.473805   24581 cni.go:84] Creating CNI manager for ""
	I0128 11:44:57.473819   24581 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:44:57.511107   24581 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:44:57.547276   24581 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:44:57.559973   24581 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 11:44:57.576960   24581 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:44:57.585533   24581 system_pods.go:59] 9 kube-system pods found
	I0128 11:44:57.585553   24581 system_pods.go:61] "coredns-787d4945fb-2cj54" [bfd1c2b9-18c6-492e-9874-f3a8b54b48f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0128 11:44:57.585558   24581 system_pods.go:61] "coredns-787d4945fb-d69k5" [361b3d7d-12b0-45f6-ac1d-bbdf1747f530] Running
	I0128 11:44:57.585561   24581 system_pods.go:61] "etcd-newest-cni-047000" [0a474f7b-449e-4d19-a142-946d8a65dccf] Running
	I0128 11:44:57.585565   24581 system_pods.go:61] "kube-apiserver-newest-cni-047000" [c917f122-c2a4-41bd-aa51-feaa62e32ca6] Running
	I0128 11:44:57.585569   24581 system_pods.go:61] "kube-controller-manager-newest-cni-047000" [8f29d7da-87a5-43db-880a-2b5a6092de7b] Running
	I0128 11:44:57.585579   24581 system_pods.go:61] "kube-proxy-spbz2" [7116e0f7-800f-4d9e-b25a-ece350c9033c] Running
	I0128 11:44:57.585584   24581 system_pods.go:61] "kube-scheduler-newest-cni-047000" [a183c4cc-0a44-4724-87c4-c2f25536fb44] Running
	I0128 11:44:57.585588   24581 system_pods.go:61] "metrics-server-7997d45854-rl97r" [f424f950-1331-4f0d-889b-1134c3cdfd96] Pending
	I0128 11:44:57.585593   24581 system_pods.go:61] "storage-provisioner" [a6e0d499-c89a-4047-aa1e-48ba6684e4ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0128 11:44:57.585598   24581 system_pods.go:74] duration metric: took 8.625414ms to wait for pod list to return data ...
	I0128 11:44:57.585604   24581 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:44:57.589475   24581 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0128 11:44:57.589499   24581 node_conditions.go:123] node cpu capacity is 6
	I0128 11:44:57.589511   24581 node_conditions.go:105] duration metric: took 3.904219ms to run NodePressure ...
	I0128 11:44:57.589523   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:44:58.071139   24581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 11:44:58.081139   24581 ops.go:34] apiserver oom_adj: -16
	I0128 11:44:58.081154   24581 kubeadm.go:637] restartCluster took 17.059487486s
	I0128 11:44:58.081165   24581 kubeadm.go:403] StartCluster complete in 17.091236834s
	I0128 11:44:58.081178   24581 settings.go:142] acquiring lock: {Name:mkfe63daf2cbfdaa44c3edb51b8dcbfb26a764e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:44:58.081267   24581 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 11:44:58.081915   24581 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/kubeconfig: {Name:mk9285754a110019f97a480561fbfd0056cc86f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:44:58.082179   24581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 11:44:58.082213   24581 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 11:44:58.082295   24581 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-047000"
	I0128 11:44:58.082323   24581 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-047000"
	I0128 11:44:58.082334   24581 addons.go:65] Setting default-storageclass=true in profile "newest-cni-047000"
	W0128 11:44:58.082343   24581 addons.go:236] addon storage-provisioner should already be in state true
	I0128 11:44:58.082356   24581 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-047000"
	I0128 11:44:58.082389   24581 host.go:66] Checking if "newest-cni-047000" exists ...
	I0128 11:44:58.082387   24581 addons.go:65] Setting dashboard=true in profile "newest-cni-047000"
	I0128 11:44:58.082414   24581 config.go:180] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:44:58.082412   24581 addons.go:65] Setting metrics-server=true in profile "newest-cni-047000"
	I0128 11:44:58.082429   24581 addons.go:227] Setting addon dashboard=true in "newest-cni-047000"
	I0128 11:44:58.082454   24581 addons.go:227] Setting addon metrics-server=true in "newest-cni-047000"
	W0128 11:44:58.082462   24581 addons.go:236] addon metrics-server should already be in state true
	W0128 11:44:58.082462   24581 addons.go:236] addon dashboard should already be in state true
	I0128 11:44:58.082502   24581 host.go:66] Checking if "newest-cni-047000" exists ...
	I0128 11:44:58.082564   24581 host.go:66] Checking if "newest-cni-047000" exists ...
	I0128 11:44:58.082765   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:58.082906   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:58.083637   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:58.083831   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:58.096094   24581 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-047000" context rescaled to 1 replicas
	I0128 11:44:58.096140   24581 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:44:58.119718   24581 out.go:177] * Verifying Kubernetes components...
	I0128 11:44:58.177651   24581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:44:58.217868   24581 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0128 11:44:58.254993   24581 addons.go:227] Setting addon default-storageclass=true in "newest-cni-047000"
	I0128 11:44:58.275415   24581 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 11:44:58.312540   24581 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0128 11:44:58.333874   24581 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0128 11:44:58.333894   24581 addons.go:236] addon default-storageclass should already be in state true
	I0128 11:44:58.371489   24581 host.go:66] Checking if "newest-cni-047000" exists ...
	I0128 11:44:58.371510   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0128 11:44:58.408933   24581 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:44:58.429637   24581 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0128 11:44:58.429648   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 11:44:58.429652   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0128 11:44:58.429648   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0128 11:44:58.429746   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:58.429763   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:58.429765   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:58.434993   24581 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0128 11:44:58.440314   24581 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 11:44:58.440307   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:58.531900   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:58.532429   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:58.535160   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:58.536136   24581 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 11:44:58.536148   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 11:44:58.536279   24581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0128 11:44:58.538319   24581 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:44:58.538398   24581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:44:58.568526   24581 api_server.go:71] duration metric: took 472.336602ms to wait for apiserver process to appear ...
	I0128 11:44:58.568556   24581 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:44:58.568572   24581 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56821/healthz ...
	I0128 11:44:58.575762   24581 api_server.go:278] https://127.0.0.1:56821/healthz returned 200:
	ok
	I0128 11:44:58.577710   24581 api_server.go:140] control plane version: v1.26.1
	I0128 11:44:58.577727   24581 api_server.go:130] duration metric: took 9.161294ms to wait for apiserver health ...
	I0128 11:44:58.577745   24581 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:44:58.586610   24581 system_pods.go:59] 9 kube-system pods found
	I0128 11:44:58.586636   24581 system_pods.go:61] "coredns-787d4945fb-2cj54" [bfd1c2b9-18c6-492e-9874-f3a8b54b48f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0128 11:44:58.586647   24581 system_pods.go:61] "coredns-787d4945fb-d69k5" [361b3d7d-12b0-45f6-ac1d-bbdf1747f530] Running
	I0128 11:44:58.586664   24581 system_pods.go:61] "etcd-newest-cni-047000" [0a474f7b-449e-4d19-a142-946d8a65dccf] Running
	I0128 11:44:58.586678   24581 system_pods.go:61] "kube-apiserver-newest-cni-047000" [c917f122-c2a4-41bd-aa51-feaa62e32ca6] Running
	I0128 11:44:58.586686   24581 system_pods.go:61] "kube-controller-manager-newest-cni-047000" [8f29d7da-87a5-43db-880a-2b5a6092de7b] Running
	I0128 11:44:58.586694   24581 system_pods.go:61] "kube-proxy-spbz2" [7116e0f7-800f-4d9e-b25a-ece350c9033c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0128 11:44:58.586702   24581 system_pods.go:61] "kube-scheduler-newest-cni-047000" [a183c4cc-0a44-4724-87c4-c2f25536fb44] Running
	I0128 11:44:58.586712   24581 system_pods.go:61] "metrics-server-7997d45854-rl97r" [f424f950-1331-4f0d-889b-1134c3cdfd96] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:44:58.586722   24581 system_pods.go:61] "storage-provisioner" [a6e0d499-c89a-4047-aa1e-48ba6684e4ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0128 11:44:58.586730   24581 system_pods.go:74] duration metric: took 8.973904ms to wait for pod list to return data ...
	I0128 11:44:58.586741   24581 default_sa.go:34] waiting for default service account to be created ...
	I0128 11:44:58.590780   24581 default_sa.go:45] found service account: "default"
	I0128 11:44:58.590798   24581 default_sa.go:55] duration metric: took 4.05144ms for default service account to be created ...
	I0128 11:44:58.590809   24581 kubeadm.go:578] duration metric: took 494.626587ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0128 11:44:58.590824   24581 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:44:58.595578   24581 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0128 11:44:58.595600   24581 node_conditions.go:123] node cpu capacity is 6
	I0128 11:44:58.595610   24581 node_conditions.go:105] duration metric: took 4.778827ms to run NodePressure ...
	I0128 11:44:58.595625   24581 start.go:228] waiting for startup goroutines ...
	I0128 11:44:58.612788   24581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56822 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0128 11:44:58.668888   24581 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0128 11:44:58.668916   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0128 11:44:58.671715   24581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:44:58.687691   24581 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0128 11:44:58.687706   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0128 11:44:58.756180   24581 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:44:58.756197   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0128 11:44:58.771827   24581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 11:44:58.777597   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0128 11:44:58.777613   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0128 11:44:58.783390   24581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:44:58.855090   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0128 11:44:58.855108   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0128 11:44:59.057343   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0128 11:44:59.057362   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0128 11:44:59.077350   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0128 11:44:59.077367   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0128 11:44:59.165814   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0128 11:44:59.165832   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0128 11:44:59.184147   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0128 11:44:59.184162   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0128 11:44:59.253982   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0128 11:44:59.253999   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0128 11:44:59.273323   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0128 11:44:59.273340   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0128 11:44:59.293042   24581 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:44:59.293063   24581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0128 11:44:59.371602   24581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:44:59.983304   24581 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.311529078s)
	I0128 11:44:59.983350   24581 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211447253s)
	I0128 11:44:59.994048   24581 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.210592777s)
	I0128 11:44:59.994080   24581 addons.go:457] Verifying addon metrics-server=true in "newest-cni-047000"
	I0128 11:45:00.185286   24581 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-047000 addons enable metrics-server	
	
	
	I0128 11:45:00.259730   24581 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0128 11:45:00.333465   24581 addons.go:492] enable addons completed in 2.25118676s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0128 11:45:00.333493   24581 start.go:233] waiting for cluster config update ...
	I0128 11:45:00.333513   24581 start.go:240] writing updated cluster config ...
	I0128 11:45:00.333903   24581 ssh_runner.go:195] Run: rm -f paused
	I0128 11:45:00.377567   24581 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0128 11:45:00.398773   24581 out.go:177] * Done! kubectl is now configured to use "newest-cni-047000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:46:41 UTC. --
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.043684247Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044647330Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044903083Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[436]: time="2023-01-28T19:19:42.044949007Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.088592507Z" level=info msg="Starting up"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090253834Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090291624Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090307486Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.090315468Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091569449Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091611067Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091623295Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.091629230Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.098516161Z" level=info msg="Loading containers: start."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.175682495Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.208852655Z" level=info msg="Loading containers: done."
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.216989221Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.217057643Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:19:42 old-k8s-version-867000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.241129214Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:19:42 old-k8s-version-867000 dockerd[622]: time="2023-01-28T19:19:42.244202317Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-01-28T19:46:44Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan28 18:55] hrtimer: interrupt took 1291156 ns
	
	* 
	* ==> kernel <==
	*  19:46:44 up  1:45,  0 users,  load average: 0.68, 1.06, 1.06
	Linux old-k8s-version-867000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:19:39 UTC, end at Sat 2023-01-28 19:46:44 UTC. --
	Jan 28 19:46:42 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: I0128 19:46:43.072376   35050 server.go:410] Version: v1.16.0
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: I0128 19:46:43.072698   35050 plugins.go:100] No cloud provider specified.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: I0128 19:46:43.072752   35050 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: I0128 19:46:43.074920   35050 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: W0128 19:46:43.075672   35050 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: W0128 19:46:43.075741   35050 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35050]: F0128 19:46:43.075767   35050 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: I0128 19:46:43.827373   35063 server.go:410] Version: v1.16.0
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: I0128 19:46:43.827821   35063 plugins.go:100] No cloud provider specified.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: I0128 19:46:43.827883   35063 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: I0128 19:46:43.829601   35063 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: W0128 19:46:43.830276   35063 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: W0128 19:46:43.830341   35063 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:46:43 old-k8s-version-867000 kubelet[35063]: F0128 19:46:43.830367   35063 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:46:43 old-k8s-version-867000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:46:44 old-k8s-version-867000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Jan 28 19:46:44 old-k8s-version-867000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:44 old-k8s-version-867000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0128 11:46:44.243047   24912 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 2 (404.924862ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.80s)


Test pass (272/306)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.65
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.1/json-events 7.18
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.32
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 12.2
19 TestBinaryMirror 1.68
20 TestOffline 54.03
22 TestAddons/Setup 148.3
26 TestAddons/parallel/MetricsServer 5.73
27 TestAddons/parallel/HelmTiller 10.94
29 TestAddons/parallel/CSI 43.55
30 TestAddons/parallel/Headlamp 10.51
31 TestAddons/parallel/CloudSpanner 5.48
34 TestAddons/serial/GCPAuth/Namespaces 0.18
35 TestAddons/StoppedEnableDisable 11.58
36 TestCertOptions 35.99
37 TestCertExpiration 252.02
38 TestDockerFlags 37.22
39 TestForceSystemdFlag 38.47
40 TestForceSystemdEnv 37.44
42 TestHyperKitDriverInstallOrUpdate 6.04
45 TestErrorSpam/setup 32.01
46 TestErrorSpam/start 2.39
47 TestErrorSpam/status 1.29
48 TestErrorSpam/pause 1.84
49 TestErrorSpam/unpause 2.05
50 TestErrorSpam/stop 11.56
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 46.27
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 32
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 7.65
62 TestFunctional/serial/CacheCmd/cache/add_local 1.72
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.1
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.88
67 TestFunctional/serial/CacheCmd/cache/delete 0.17
68 TestFunctional/serial/MinikubeKubectlCmd 0.57
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
70 TestFunctional/serial/ExtraConfig 38.6
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3.28
73 TestFunctional/serial/LogsFileCmd 3.28
76 TestFunctional/parallel/DashboardCmd 13.73
77 TestFunctional/parallel/DryRun 1.88
78 TestFunctional/parallel/InternationalLanguage 0.73
79 TestFunctional/parallel/StatusCmd 1.34
82 TestFunctional/parallel/ServiceCmd 14.01
84 TestFunctional/parallel/AddonsCmd 0.29
85 TestFunctional/parallel/PersistentVolumeClaim 28.72
87 TestFunctional/parallel/SSHCmd 0.88
88 TestFunctional/parallel/CpCmd 1.79
89 TestFunctional/parallel/MySQL 23.27
90 TestFunctional/parallel/FileSync 0.45
91 TestFunctional/parallel/CertSync 2.71
95 TestFunctional/parallel/NodeLabels 0.05
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
99 TestFunctional/parallel/License 0.37
101 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
103 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.26
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
109 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
110 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
111 TestFunctional/parallel/ProfileCmd/profile_list 0.54
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
113 TestFunctional/parallel/MountCmd/any-port 9.87
114 TestFunctional/parallel/MountCmd/specific-port 2.93
115 TestFunctional/parallel/Version/short 0.12
116 TestFunctional/parallel/Version/components 1.15
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.44
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
122 TestFunctional/parallel/ImageCommands/Setup 2.61
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.17
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.82
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.93
126 TestFunctional/parallel/DockerEnv/bash 1.79
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.44
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.29
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.96
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.01
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.21
134 TestFunctional/delete_addon-resizer_images 0.16
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 2.22
141 TestImageBuild/serial/BuildWithBuildArg 0.96
142 TestImageBuild/serial/BuildWithDockerIgnore 0.49
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.42
153 TestJSONOutput/start/Command 55.18
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.64
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.6
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.83
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.76
178 TestKicCustomNetwork/create_custom_network 31.98
179 TestKicCustomNetwork/use_default_bridge_network 36.56
180 TestKicExistingNetwork 38.16
181 TestKicCustomSubnet 36.39
182 TestKicStaticIP 32.62
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 69.2
187 TestMountStart/serial/StartWithMountFirst 8.24
188 TestMountStart/serial/VerifyMountFirst 0.41
189 TestMountStart/serial/StartWithMountSecond 8.26
190 TestMountStart/serial/VerifyMountSecond 0.42
191 TestMountStart/serial/DeleteFirst 2.15
192 TestMountStart/serial/VerifyMountPostDelete 0.41
193 TestMountStart/serial/Stop 1.57
194 TestMountStart/serial/RestartStopped 5.88
195 TestMountStart/serial/VerifyMountPostStop 0.41
198 TestMultiNode/serial/FreshStart2Nodes 79.06
199 TestMultiNode/serial/DeployApp2Nodes 9.59
200 TestMultiNode/serial/PingHostFrom2Pods 0.95
201 TestMultiNode/serial/AddNode 27.33
202 TestMultiNode/serial/ProfileList 0.52
203 TestMultiNode/serial/CopyFile 15.26
204 TestMultiNode/serial/StopNode 3.14
205 TestMultiNode/serial/StartAfterStop 10.52
206 TestMultiNode/serial/RestartKeepsNodes 83.4
207 TestMultiNode/serial/DeleteNode 6.33
208 TestMultiNode/serial/StopMultiNode 22.01
209 TestMultiNode/serial/RestartMultiNode 52.72
210 TestMultiNode/serial/ValidateNameConflict 38
214 TestPreload 120.22
216 TestScheduledStopUnix 107.79
217 TestSkaffold 60.42
219 TestInsufficientStorage 14.57
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 6.95
237 TestStoppedBinaryUpgrade/Setup 0.55
239 TestStoppedBinaryUpgrade/MinikubeLogs 3.54
241 TestPause/serial/Start 54.58
242 TestPause/serial/SecondStartNoReconfiguration 54.62
243 TestPause/serial/Pause 0.75
244 TestPause/serial/VerifyStatus 0.46
245 TestPause/serial/Unpause 0.95
246 TestPause/serial/PauseAgain 1.03
247 TestPause/serial/DeletePaused 2.95
248 TestPause/serial/VerifyDeletedResources 0.64
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.42
258 TestNoKubernetes/serial/StartWithK8s 34.33
259 TestNoKubernetes/serial/StartWithStopK8s 17.71
260 TestNoKubernetes/serial/Start 7.36
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
262 TestNoKubernetes/serial/ProfileList 35.01
263 TestNoKubernetes/serial/Stop 1.61
264 TestNoKubernetes/serial/StartNoArgs 4.98
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
266 TestNetworkPlugins/group/auto/Start 54.92
267 TestNetworkPlugins/group/auto/KubeletFlags 0.42
268 TestNetworkPlugins/group/auto/NetCatPod 14.22
269 TestNetworkPlugins/group/auto/DNS 0.13
270 TestNetworkPlugins/group/auto/Localhost 0.12
271 TestNetworkPlugins/group/auto/HairPin 0.12
272 TestNetworkPlugins/group/calico/Start 74.06
273 TestNetworkPlugins/group/calico/ControllerPod 5.02
274 TestNetworkPlugins/group/calico/KubeletFlags 0.44
275 TestNetworkPlugins/group/calico/NetCatPod 19.25
276 TestNetworkPlugins/group/calico/DNS 0.14
277 TestNetworkPlugins/group/calico/Localhost 0.13
278 TestNetworkPlugins/group/calico/HairPin 0.11
279 TestNetworkPlugins/group/custom-flannel/Start 68.4
280 TestNetworkPlugins/group/false/Start 53.82
281 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.49
282 TestNetworkPlugins/group/custom-flannel/NetCatPod 19.25
283 TestNetworkPlugins/group/false/KubeletFlags 0.5
284 TestNetworkPlugins/group/false/NetCatPod 15.2
285 TestNetworkPlugins/group/custom-flannel/DNS 0.13
286 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
287 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
288 TestNetworkPlugins/group/false/DNS 0.15
289 TestNetworkPlugins/group/false/Localhost 0.13
290 TestNetworkPlugins/group/false/HairPin 0.11
291 TestNetworkPlugins/group/kindnet/Start 53.04
292 TestNetworkPlugins/group/flannel/Start 62.65
293 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
295 TestNetworkPlugins/group/kindnet/NetCatPod 15.24
296 TestNetworkPlugins/group/flannel/ControllerPod 5.01
297 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
298 TestNetworkPlugins/group/kindnet/DNS 0.14
299 TestNetworkPlugins/group/kindnet/Localhost 0.12
300 TestNetworkPlugins/group/kindnet/HairPin 0.12
301 TestNetworkPlugins/group/flannel/NetCatPod 20.22
302 TestNetworkPlugins/group/flannel/DNS 0.15
303 TestNetworkPlugins/group/flannel/Localhost 0.13
304 TestNetworkPlugins/group/flannel/HairPin 0.15
305 TestNetworkPlugins/group/enable-default-cni/Start 54.25
306 TestNetworkPlugins/group/bridge/Start 48.8
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.21
309 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
312 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
313 TestNetworkPlugins/group/bridge/NetCatPod 20.2
314 TestNetworkPlugins/group/bridge/DNS 0.15
315 TestNetworkPlugins/group/bridge/Localhost 0.15
316 TestNetworkPlugins/group/bridge/HairPin 0.13
317 TestNetworkPlugins/group/kubenet/Start 53.18
320 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
321 TestNetworkPlugins/group/kubenet/NetCatPod 19.22
322 TestNetworkPlugins/group/kubenet/DNS 0.11
323 TestNetworkPlugins/group/kubenet/Localhost 0.12
324 TestNetworkPlugins/group/kubenet/HairPin 0.11
326 TestStartStop/group/no-preload/serial/FirstStart 62.99
327 TestStartStop/group/no-preload/serial/DeployApp 13.28
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
329 TestStartStop/group/no-preload/serial/Stop 11.14
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
331 TestStartStop/group/no-preload/serial/SecondStart 557.71
334 TestStartStop/group/old-k8s-version/serial/Stop 1.59
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.5
340 TestStartStop/group/no-preload/serial/Pause 3.44
342 TestStartStop/group/embed-certs/serial/FirstStart 47.15
343 TestStartStop/group/embed-certs/serial/DeployApp 9.29
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
345 TestStartStop/group/embed-certs/serial/Stop 10.98
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.4
347 TestStartStop/group/embed-certs/serial/SecondStart 556.72
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.45
352 TestStartStop/group/embed-certs/serial/Pause 3.39
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.12
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.86
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.04
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 308.68
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.46
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.39
366 TestStartStop/group/newest-cni/serial/FirstStart 43.13
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
369 TestStartStop/group/newest-cni/serial/Stop 10.99
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
371 TestStartStop/group/newest-cni/serial/SecondStart 25.62
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
375 TestStartStop/group/newest-cni/serial/Pause 3.47
TestDownloadOnly/v1.16.0/json-events (10.65s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-289000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-289000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (10.654626253s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.65s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-289000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-289000: exit status 85 (300.347641ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-289000 | jenkins | v1.29.0 | 28 Jan 23 10:21 PST |          |
	|         | -p download-only-289000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 10:21:02
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 10:21:02.150179    3851 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:21:02.150356    3851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:21:02.150361    3851 out.go:309] Setting ErrFile to fd 2...
	I0128 10:21:02.150365    3851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:21:02.150473    3851 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	W0128 10:21:02.150574    3851 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-2556/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-2556/.minikube/config/config.json: no such file or directory
	I0128 10:21:02.151300    3851 out.go:303] Setting JSON to true
	I0128 10:21:02.170771    3851 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1237,"bootTime":1674928825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:21:02.170859    3851 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:21:02.193506    3851 out.go:97] [download-only-289000] minikube v1.29.0 on Darwin 13.2
	I0128 10:21:02.193750    3851 notify.go:220] Checking for updates...
	W0128 10:21:02.193756    3851 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball: no such file or directory
	I0128 10:21:02.214641    3851 out.go:169] MINIKUBE_LOCATION=15565
	I0128 10:21:02.236395    3851 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:21:02.258555    3851 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:21:02.301500    3851 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:21:02.322535    3851 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	W0128 10:21:02.364677    3851 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 10:21:02.365072    3851 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:21:02.425958    3851 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:21:02.426072    3851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:21:02.573510    3851 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 18:21:02.476024854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:21:02.594336    3851 out.go:97] Using the docker driver based on user configuration
	I0128 10:21:02.594396    3851 start.go:296] selected driver: docker
	I0128 10:21:02.594432    3851 start.go:857] validating driver "docker" against <nil>
	I0128 10:21:02.594674    3851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:21:02.739857    3851 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 18:21:02.644810832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:21:02.739983    3851 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 10:21:02.744033    3851 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0128 10:21:02.744207    3851 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 10:21:02.765715    3851 out.go:169] Using Docker Desktop driver with root privileges
	I0128 10:21:02.787235    3851 cni.go:84] Creating CNI manager for ""
	I0128 10:21:02.787258    3851 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:21:02.787267    3851 start_flags.go:319] config:
	{Name:download-only-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:21:02.808697    3851 out.go:97] Starting control plane node download-only-289000 in cluster download-only-289000
	I0128 10:21:02.808745    3851 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:21:02.830521    3851 out.go:97] Pulling base image ...
	I0128 10:21:02.830651    3851 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:21:02.830769    3851 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:21:02.886918    3851 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:21:02.886953    3851 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 10:21:02.886974    3851 cache.go:57] Caching tarball of preloaded images
	I0128 10:21:02.887168    3851 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:21:02.887196    3851 image.go:61] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
	I0128 10:21:02.887336    3851 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:21:02.910249    3851 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0128 10:21:02.910277    3851 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:21:02.996805    3851 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 10:21:05.433852    3851 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:21:05.434027    3851 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:21:05.980446    3851 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 10:21:05.980672    3851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/download-only-289000/config.json ...
	I0128 10:21:05.980701    3851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/download-only-289000/config.json: {Name:mkbc155e8801a34bbfe82861846df4b111e7e5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:21:05.980977    3851 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:21:05.981236    3851 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-289000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
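
A note on the exit status 85 recorded above: a --download-only profile writes its config but never creates a control plane node (hence the `The control plane node "" does not exist` message), so "minikube logs" has nothing to collect and exits non-zero, and the test logs the error yet still passes. A minimal Go sketch of capturing that exit code the way a harness can, reusing the binary path and profile name from this run; the helper itself is illustrative, not the actual test code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runLogs runs the minikube binary under test and returns its combined
	// output and exit code, treating a non-zero exit as data rather than a
	// hard failure -- which is how the test can log "exit status 85" and
	// still pass.
	func runLogs(binary, profile string) (string, int, error) {
		out, err := exec.Command(binary, "logs", "-p", profile).CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return string(out), exitErr.ExitCode(), nil
		}
		return string(out), 0, err
	}

	func main() {
		out, code, err := runLogs("out/minikube-darwin-amd64", "download-only-289000")
		if err != nil {
			panic(err)
		}
		fmt.Printf("exit status %d\n%s", code, out)
	}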

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (7.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-289000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-289000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (7.17623692s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (7.18s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-289000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-289000: exit status 85 (314.861122ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-289000 | jenkins | v1.29.0 | 28 Jan 23 10:21 PST |          |
	|         | -p download-only-289000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-289000 | jenkins | v1.29.0 | 28 Jan 23 10:21 PST |          |
	|         | -p download-only-289000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 10:21:13
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 10:21:13.107862    3889 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:21:13.108017    3889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:21:13.108022    3889 out.go:309] Setting ErrFile to fd 2...
	I0128 10:21:13.108026    3889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:21:13.108133    3889 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	W0128 10:21:13.108222    3889 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-2556/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-2556/.minikube/config/config.json: no such file or directory
	I0128 10:21:13.108567    3889 out.go:303] Setting JSON to true
	I0128 10:21:13.127197    3889 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1248,"bootTime":1674928825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:21:13.127278    3889 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:21:13.149090    3889 out.go:97] [download-only-289000] minikube v1.29.0 on Darwin 13.2
	I0128 10:21:13.149293    3889 notify.go:220] Checking for updates...
	I0128 10:21:13.171099    3889 out.go:169] MINIKUBE_LOCATION=15565
	I0128 10:21:13.192950    3889 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:21:13.215168    3889 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:21:13.237134    3889 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:21:13.258925    3889 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	W0128 10:21:13.301923    3889 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 10:21:13.302604    3889 config.go:180] Loaded profile config "download-only-289000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0128 10:21:13.302693    3889 start.go:765] api.Load failed for download-only-289000: filestore "download-only-289000": Docker machine "download-only-289000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0128 10:21:13.302773    3889 driver.go:365] Setting default libvirt URI to qemu:///system
	W0128 10:21:13.302810    3889 start.go:765] api.Load failed for download-only-289000: filestore "download-only-289000": Docker machine "download-only-289000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0128 10:21:13.362835    3889 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:21:13.362951    3889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:21:13.510880    3889 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 18:21:13.413222357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:21:13.532201    3889 out.go:97] Using the docker driver based on existing profile
	I0128 10:21:13.532299    3889 start.go:296] selected driver: docker
	I0128 10:21:13.532339    3889 start.go:857] validating driver "docker" against &{Name:download-only-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:21:13.532640    3889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:21:13.676629    3889 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 18:21:13.583131525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:21:13.679107    3889 cni.go:84] Creating CNI manager for ""
	I0128 10:21:13.679129    3889 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 10:21:13.679146    3889 start_flags.go:319] config:
	{Name:download-only-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:21:13.700512    3889 out.go:97] Starting control plane node download-only-289000 in cluster download-only-289000
	I0128 10:21:13.700619    3889 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:21:13.723380    3889 out.go:97] Pulling base image ...
	I0128 10:21:13.723433    3889 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 10:21:13.723514    3889 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:21:13.776958    3889 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:21:13.777120    3889 image.go:61] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
	I0128 10:21:13.777141    3889 image.go:64] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory, skipping pull
	I0128 10:21:13.777147    3889 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in cache, skipping pull
	I0128 10:21:13.777154    3889 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 as a tarball
	I0128 10:21:13.784166    3889 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 10:21:13.784186    3889 cache.go:57] Caching tarball of preloaded images
	I0128 10:21:13.784460    3889 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 10:21:13.806330    3889 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0128 10:21:13.806438    3889 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:21:13.888845    3889 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15565-2556/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-289000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.32s)
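
The preload fetch above (download.go:101) carries the expected digest in a "?checksum=md5:..." query parameter, and preload.go:238/249/256 show the checksum being computed, saved, and verified on disk. A rough Go sketch of that download-then-verify pattern, using the v1.26.1 tarball URL and digest from this log; the function name and /tmp destination are made up for illustration:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadAndVerify fetches url into dest and compares the payload's
	// MD5 against wantHex, the digest encoded in the "?checksum=md5:..."
	// query parameter seen in the log.
	func downloadAndVerify(url, dest, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		sum := md5.New()
		// Write to disk and hash in a single pass.
		if _, err := io.Copy(io.MultiWriter(f, sum), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(sum.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
		if err := downloadAndVerify(url, "/tmp/preload.tar.lz4", "c6cc8ea1da4e19500d6fe35540785ea8"); err != nil {
			fmt.Println(err)
		}
	}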

                                                
                                    
TestDownloadOnly/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-289000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
TestDownloadOnlyKic (12.2s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-006000 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-006000 --force --alsologtostderr --driver=docker : (11.094248906s)
helpers_test.go:175: Cleaning up "download-docker-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-006000
--- PASS: TestDownloadOnlyKic (12.20s)
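
The image.go and cache.go lines in the runs above show the lookup order for the kic base image: the local docker daemon first, then minikube's on-disk cache, and only then a download. A small illustrative Go helper mirroring that order; the cache tarball path passed in main is a guess for illustration, not minikube's real cache layout:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// baseImageSource reports where the kic base image would come from,
	// in the order the log shows: local docker daemon, then the on-disk
	// cache, then a remote pull.
	func baseImageSource(ref, cachedTar string) string {
		if exec.Command("docker", "image", "inspect", ref).Run() == nil {
			return "local docker daemon"
		}
		if _, err := os.Stat(cachedTar); err == nil {
			return "local cache directory"
		}
		return "remote registry (download to cache)"
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase:v0.0.37"
		// Hypothetical cache location, for illustration only.
		fmt.Println(baseImageSource(ref, os.ExpandEnv("$HOME/.minikube/cache/kic/kicbase_v0.0.37.tar")))
	}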

                                                
                                    
TestBinaryMirror (1.68s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-936000 --alsologtostderr --binary-mirror http://127.0.0.1:49466 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-936000 --alsologtostderr --binary-mirror http://127.0.0.1:49466 --driver=docker : (1.055910977s)
helpers_test.go:175: Cleaning up "binary-mirror-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-936000
--- PASS: TestBinaryMirror (1.68s)

                                                
                                    
TestOffline (54.03s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-749000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-749000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (51.218046107s)
helpers_test.go:175: Cleaning up "offline-docker-749000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-749000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-749000: (2.808464129s)
--- PASS: TestOffline (54.03s)

                                                
                                    
TestAddons/Setup (148.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-869000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-869000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.303417321s)
--- PASS: TestAddons/Setup (148.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 1.961709ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-llt9x" [69830a63-6fb1-4cc3-8036-b05997c86b74] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009383224s
addons_test.go:380: (dbg) Run:  kubectl --context addons-869000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-869000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.94s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.454476ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-qfhjh" [82d5c3c8-e5de-44bd-8e26-45be627b770d] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01015083s
addons_test.go:438: (dbg) Run:  kubectl --context addons-869000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-869000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.322386577s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-869000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.94s)

                                                
                                    
TestAddons/parallel/CSI (43.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.783891ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4f10b221-2a4d-431f-aa40-8862fb2db7f8] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [4f10b221-2a4d-431f-aa40-8862fb2db7f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [4f10b221-2a4d-431f-aa40-8862fb2db7f8] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.00965312s
addons_test.go:549: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/snapshot.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-869000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:419: (dbg) Run:  kubectl --context addons-869000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-869000 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-869000 delete pod task-pv-pod: (1.068064731s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-869000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ca7698d3-1053-4632-8b01-1ee0160a1a96] Pending
helpers_test.go:344: "task-pv-pod-restore" [ca7698d3-1053-4632-8b01-1ee0160a1a96] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [ca7698d3-1053-4632-8b01-1ee0160a1a96] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.011555353s
addons_test.go:591: (dbg) Run:  kubectl --context addons-869000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-869000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-869000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-869000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-869000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.000327832s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-869000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.55s)
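
The CSI exercise above walks one full cycle: create a PVC, run a pod against it, snapshot the volume, delete the pod and claim, then restore the snapshot into a new PVC and pod. Each wait (helpers_test.go:394) polls kubectl's jsonpath output until the claim reports Bound. A stand-alone Go sketch of that polling loop, reusing the context, claim name, and namespace from this run; the helper itself is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound polls the same jsonpath the test helper uses
	// (kubectl get pvc <name> -o jsonpath={.status.phase}) until the
	// claim reports Bound or the timeout expires.
	func waitPVCBound(kubectx, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", kubectx,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-869000", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}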

                                                
                                    
TestAddons/parallel/Headlamp (10.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-869000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-869000 --alsologtostderr -v=1: (1.498030836s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-mj96p" [7d07b1a9-3dfc-4b51-9b18-26141a1ffeb1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-mj96p" [7d07b1a9-3dfc-4b51-9b18-26141a1ffeb1] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.007546993s
--- PASS: TestAddons/parallel/Headlamp (10.51s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-769b7f8b64-4whzf" [4d097595-e359-47bb-b21c-d0bc2897193c] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009693797s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-869000
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-869000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-869000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.58s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-869000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-869000: (11.126806601s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-869000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-869000
--- PASS: TestAddons/StoppedEnableDisable (11.58s)

                                                
                                    
TestCertOptions (35.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-969000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-969000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (32.311086322s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-969000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-969000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-969000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-969000: (2.74469068s)
--- PASS: TestCertOptions (35.99s)
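
The ssh step at cert_options_test.go:60 dumps /var/lib/minikube/certs/apiserver.crt with openssl, presumably so the test can check that the extra --apiserver-ips and --apiserver-names values landed in the certificate's subject alternative names. The same inspection can be done in Go with crypto/x509; a sketch, assuming the certificate has first been copied out of the node to a local file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// printSANs decodes a PEM certificate and prints the DNS names and
	// IP addresses the test asserts on.
	func printSANs(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		fmt.Println("DNS:", cert.DNSNames)
		fmt.Println("IPs:", cert.IPAddresses)
		return nil
	}

	func main() {
		// Illustrative local copy of the node's apiserver certificate.
		if err := printSANs("apiserver.crt"); err != nil {
			fmt.Println(err)
		}
	}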

TestCertExpiration (252.02s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-293000 --memory=2048 --cert-expiration=3m --driver=docker 
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-293000 --memory=2048 --cert-expiration=3m --driver=docker : (36.498833057s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-293000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0128 10:59:57.563909    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-293000 --memory=2048 --cert-expiration=8760h --driver=docker : (32.877223289s)
helpers_test.go:175: Cleaning up "cert-expiration-293000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-293000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-293000: (2.646321829s)
--- PASS: TestCertExpiration (252.02s)
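
The two start invocations only make sense together: the first mints certificates that lapse after three minutes, and the second, run once they have expired, must transparently regenerate them with the 8760h lifetime. One hedged way to watch that happen between the two runs (the openssl probe is our addition, not something the suite runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Print the notAfter date of the apiserver cert: ~3m out after the first start,
		// roughly a year out after the second start regenerates it.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "cert-expiration-293000", "ssh",
			"sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		fmt.Print(string(out)) // e.g. notAfter=Jan 28 11:02:57 2023 GMT
	}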

TestDockerFlags (37.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-752000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-752000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (33.666111873s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-752000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-752000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-752000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-752000: (2.536666843s)
--- PASS: TestDockerFlags (37.22s)
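
The two systemctl probes are the whole assertion: --docker-env values must surface in the Docker unit's Environment property and --docker-opt values in its ExecStart line. A rough equivalent, assuming each --docker-opt=key=value materializes as a --key=value dockerd flag (profile name from the log; the show helper is ours):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// show returns one systemd property of the in-node Docker unit.
	func show(property string) string {
		out, _ := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-752000", "ssh",
			"sudo systemctl show docker --property="+property+" --no-pager").CombinedOutput()
		return string(out)
	}

	func main() {
		env := show("Environment")
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				fmt.Println("missing docker env:", want)
			}
		}
		start := show("ExecStart")
		for _, want := range []string{"--debug", "--icc=true"} {
			if !strings.Contains(start, want) {
				fmt.Println("missing docker opt:", want)
			}
		}
	}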

TestForceSystemdFlag (38.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-115000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-115000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (35.254881239s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-115000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-115000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-115000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-115000: (2.703190984s)
--- PASS: TestForceSystemdFlag (38.47s)
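
The pass/fail question here is a single string: with --force-systemd, Docker inside the node must report the systemd cgroup driver instead of the image's cgroupfs default. The probe the test wraps, as a standalone sketch (profile name from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker inside the node which cgroup driver it ended up with.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-115000", "ssh",
			"docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "systemd" {
			fmt.Println("unexpected cgroup driver:", got)
		}
	}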

TestForceSystemdEnv (37.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-205000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-205000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (34.14890472s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-205000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-205000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-205000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-205000: (2.742135015s)
--- PASS: TestForceSystemdEnv (37.44s)

TestHyperKitDriverInstallOrUpdate (6.04s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.04s)

TestErrorSpam/setup (32.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 --driver=docker : (32.013547185s)
--- PASS: TestErrorSpam/setup (32.01s)

TestErrorSpam/start (2.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 start --dry-run
--- PASS: TestErrorSpam/start (2.39s)

TestErrorSpam/status (1.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 status
--- PASS: TestErrorSpam/status (1.29s)

TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (2.05s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 unpause
--- PASS: TestErrorSpam/unpause (2.05s)

TestErrorSpam/stop (11.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 stop: (10.903096899s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-848000 stop
--- PASS: TestErrorSpam/stop (11.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-2556/.minikube/files/etc/test/nested/copy/3849/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-000000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (46.26910185s)
--- PASS: TestFunctional/serial/StartWithProxy (46.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-000000 --alsologtostderr -v=8: (31.995188577s)
functional_test.go:656: soft start took 31.995640962s for "functional-000000" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-000000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:3.1: (2.565306944s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:3.3: (2.657078486s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 cache add k8s.gcr.io/pause:latest: (2.430149512s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.65s)

TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local198692859/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache add minikube-local-cache-test:functional-000000
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 cache add minikube-local-cache-test:functional-000000: (1.14839371s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache delete minikube-local-cache-test:functional-000000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-000000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.10s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (417.941441ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 cache reload: (1.589925294s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.88s)
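
The choreography above: delete the image out from under the runtime, confirm crictl no longer sees it (the expected exit status 1), then have `cache reload` push it back from the host-side cache. The same loop condensed (the mk helper is ours; error output is simply echoed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs a minikube subcommand against the functional profile and reports success.
	func mk(args ...string) error {
		out, err := exec.Command("out/minikube-darwin-amd64",
			append([]string{"-p", "functional-000000"}, args...)...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		mk("ssh", "sudo docker rmi k8s.gcr.io/pause:latest") // remove the image in-node
		if mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") == nil {
			fmt.Println("image unexpectedly still present")
		}
		mk("cache", "reload") // restore every cached image into the node
		if mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") != nil {
			fmt.Println("image did not come back after reload")
		}
	}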

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 kubectl -- --context functional-000000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.57s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-000000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (38.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-000000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.601843659s)
functional_test.go:754: restart took 38.602042619s for "functional-000000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.60s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-000000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 logs: (3.276645466s)
--- PASS: TestFunctional/serial/LogsCmd (3.28s)

TestFunctional/serial/LogsFileCmd (3.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1494506536/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1494506536/001/logs.txt: (3.274157113s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.28s)

TestFunctional/parallel/DashboardCmd (13.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-000000 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-000000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 6136: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.73s)

TestFunctional/parallel/DryRun (1.88s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-000000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (875.594767ms)
-- stdout --
	* [functional-000000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0128 10:29:26.337892    6025 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:29:26.338075    6025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:29:26.338081    6025 out.go:309] Setting ErrFile to fd 2...
	I0128 10:29:26.338085    6025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:29:26.338192    6025 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:29:26.338667    6025 out.go:303] Setting JSON to false
	I0128 10:29:26.359540    6025 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1741,"bootTime":1674928825,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:29:26.359636    6025 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:29:26.380939    6025 out.go:177] * [functional-000000] minikube v1.29.0 on Darwin 13.2
	I0128 10:29:26.423232    6025 notify.go:220] Checking for updates...
	I0128 10:29:26.444880    6025 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:29:26.486914    6025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:29:26.528977    6025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:29:26.570781    6025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:29:26.628981    6025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 10:29:26.705194    6025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:29:26.729081    6025 config.go:180] Loaded profile config "functional-000000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:29:26.729860    6025 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:29:26.798835    6025 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:29:26.798971    6025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:29:26.954979    6025 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 18:29:26.85568726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:29:26.996041    6025 out.go:177] * Using the docker driver based on existing profile
	I0128 10:29:27.016967    6025 start.go:296] selected driver: docker
	I0128 10:29:27.016993    6025 start.go:857] validating driver "docker" against &{Name:functional-000000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:29:27.017144    6025 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:29:27.062145    6025 out.go:177] 
	W0128 10:29:27.083052    6025 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0128 10:29:27.104149    6025 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --dry-run --alsologtostderr -v=1 --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:984: (dbg) Done: out/minikube-darwin-amd64 start -p functional-000000 --dry-run --alsologtostderr -v=1 --driver=docker : (1.007285858s)
--- PASS: TestFunctional/parallel/DryRun (1.88s)
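
Exit status 23 in the first run is the interesting bit: 250MB is below minikube's 1800MB floor, and --dry-run surfaces that validation (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the existing cluster. A sketch of checking for exactly that code, with the code's meaning taken from the log above rather than any documented contract:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-000000",
			"--dry-run", "--memory", "250MB", "--driver=docker")
		err := cmd.Run()
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // 23 here means the memory validation fired
		} else {
			fmt.Println("expected a validation failure, got:", err)
		}
	}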

TestFunctional/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-000000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-000000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (730.038017ms)
-- stdout --
	* [functional-000000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0128 10:29:25.595019    6000 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:29:25.595181    6000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:29:25.595186    6000 out.go:309] Setting ErrFile to fd 2...
	I0128 10:29:25.595190    6000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:29:25.595309    6000 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:29:25.595762    6000 out.go:303] Setting JSON to false
	I0128 10:29:25.616759    6000 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1740,"bootTime":1674928825,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0128 10:29:25.616862    6000 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:29:25.640925    6000 out.go:177] * [functional-000000] minikube v1.29.0 sur Darwin 13.2
	I0128 10:29:25.683060    6000 notify.go:220] Checking for updates...
	I0128 10:29:25.703853    6000 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:29:25.746298    6000 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	I0128 10:29:25.768022    6000 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:29:25.789131    6000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:29:25.830805    6000 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	I0128 10:29:25.873149    6000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:29:25.894592    6000 config.go:180] Loaded profile config "functional-000000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:29:25.895187    6000 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:29:25.961667    6000 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:29:25.961791    6000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:29:26.118416    6000 info.go:266] docker info: {ID:ZQRK:VGKA:BF77:XHQV:PEXM:RJZJ:SOX5:CSWG:2I44:5P6X:JP3N:A4CH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 18:29:26.017422487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:29:26.140433    6000 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0128 10:29:26.161069    6000 start.go:296] selected driver: docker
	I0128 10:29:26.161089    6000 start.go:857] validating driver "docker" against &{Name:functional-000000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-000000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:29:26.161221    6000 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:29:26.185814    6000 out.go:177] 
	W0128 10:29:26.207183    6000 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0128 10:29:26.228271    6000 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E0128 10:29:24.663381    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

TestFunctional/parallel/ServiceCmd (14.01s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-000000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-000000 expose deployment hello-node --type=NodePort --port=8080
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-q2tb2" [46ccf271-00dc-48e3-aa08-f8e3a6b1b6a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-q2tb2" [46ccf271-00dc-48e3-aa08-f8e3a6b1b6a1] Running
E0128 10:29:14.423221    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.008939994s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 service --namespace=default --https --url hello-node: (2.061455192s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50347
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 service hello-node --url --format={{.IP}}: (2.028365016s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 service hello-node --url: (2.041515738s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50376
--- PASS: TestFunctional/parallel/ServiceCmd (14.01s)
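
The shape of this test: create a deployment, expose it as a NodePort service, then let `minikube service ... --url` resolve a reachable endpoint; on the docker driver that comes back through a localhost port-forward, which is why both endpoints above live on 127.0.0.1. The setup and lookup, condensed (the run helper is ours; the pod-readiness wait the test performs is elided):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(name string, args ...string) string {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			fmt.Println(name, "failed:", err)
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		run("kubectl", "--context", "functional-000000", "create", "deployment", "hello-node",
			"--image=k8s.gcr.io/echoserver:1.8")
		run("kubectl", "--context", "functional-000000", "expose", "deployment", "hello-node",
			"--type=NodePort", "--port=8080")
		// On the docker driver this resolves through a forwarded localhost port.
		fmt.Println("endpoint:", run("out/minikube-darwin-amd64", "-p", "functional-000000",
			"service", "hello-node", "--url"))
	}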

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (28.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5d5723c2-3561-482e-9441-ace8481f440c] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009510728s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-000000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-000000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c8f113ad-da3d-4ac0-8581-218ba8c84a14] Pending
helpers_test.go:344: "sp-pod" [c8f113ad-da3d-4ac0-8581-218ba8c84a14] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [c8f113ad-da3d-4ac0-8581-218ba8c84a14] Running
E0128 10:29:04.182205    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.188144    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.198286    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.219385    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.259588    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.340029    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.500161    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:04.820453    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.010184173s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-000000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-000000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-000000 delete -f testdata/storage-provisioner/pod.yaml: (1.005806951s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [396b1ba8-df12-4782-9154-598d5853210c] Pending
helpers_test.go:344: "sp-pod" [396b1ba8-df12-4782-9154-598d5853210c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0128 10:29:09.301691    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [396b1ba8-df12-4782-9154-598d5853210c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008395604s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-000000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.72s)

TestFunctional/parallel/SSHCmd (0.88s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)

TestFunctional/parallel/CpCmd (1.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh -n functional-000000 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 cp functional-000000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd259276118/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh -n functional-000000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)

TestFunctional/parallel/MySQL (23.27s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-000000 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-x8tn5" [95d0a22f-b3a9-4162-9cb6-1e8078402236] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-x8tn5" [95d0a22f-b3a9-4162-9cb6-1e8078402236] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.007096596s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000000 exec mysql-888f84dd9-x8tn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000000 exec mysql-888f84dd9-x8tn5 -- mysql -ppassword -e "show databases;": exit status 1 (172.50825ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000000 exec mysql-888f84dd9-x8tn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000000 exec mysql-888f84dd9-x8tn5 -- mysql -ppassword -e "show databases;": exit status 1 (113.860748ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000000 exec mysql-888f84dd9-x8tn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.27s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/3849/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /etc/test/nested/copy/3849/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/3849.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /etc/ssl/certs/3849.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/3849.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /usr/share/ca-certificates/3849.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/38492.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /etc/ssl/certs/38492.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/38492.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /usr/share/ca-certificates/38492.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.71s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-000000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh "sudo systemctl is-active crio": exit status 1 (524.770469ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-000000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-000000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1107fba0-5316-44b8-a51c-4ffa2a148ee3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [1107fba0-5316-44b8-a51c-4ffa2a148ee3] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.061507704s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-000000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-000000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 5750: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "448.084536ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "90.188929ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "505.113332ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "83.705226ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/MountCmd/any-port (9.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3867147928/001:/mount-9p --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:103: wrote "test-1674930558080776000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3867147928/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674930558080776000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3867147928/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674930558080776000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3867147928/001/test-1674930558080776000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (517.818013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 18:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 18:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 18:29 test-1674930558080776000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh cat /mount-9p/test-1674930558080776000

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-000000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [772cddd6-355e-448f-b9a8-3c89ad1bb4cf] Pending
helpers_test.go:344: "busybox-mount" [772cddd6-355e-448f-b9a8-3c89ad1bb4cf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [772cddd6-355e-448f-b9a8-3c89ad1bb4cf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [772cddd6-355e-448f-b9a8-3c89ad1bb4cf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.009100235s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-000000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3867147928/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.87s)

TestFunctional/parallel/MountCmd/specific-port (2.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3174096368/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.417165ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3174096368/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh "sudo umount -f /mount-9p": exit status 1 (405.001607ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-000000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-000000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3174096368/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.93s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 version -o=json --components: (1.153172016s)
--- PASS: TestFunctional/parallel/Version/components (1.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-000000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-000000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-000000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-000000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-000000 | 3d5648c7b4665 | 30B    |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| gcr.io/google-containers/addon-resizer      | functional-000000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-000000 image ls --format json:
[{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"3d5648c7b4665d1c00a2e8c90dbbbccea4a9c60fa9c0b46335a68a7da64a137b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-000000"],"size":"30"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-000000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-000000 image ls --format yaml:
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-000000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 3d5648c7b4665d1c00a2e8c90dbbbccea4a9c60fa9c0b46335a68a7da64a137b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-000000
size: "30"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-000000 ssh pgrep buildkitd: exit status 1 (415.742919ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image build -t localhost/my-image:functional-000000 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image build -t localhost/my-image:functional-000000 testdata/build: (2.788617237s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-000000 image build -t localhost/my-image:functional-000000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f61a63cc9428
Removing intermediate container f61a63cc9428
---> e415e4d3a7d1
Step 3/3 : ADD content.txt /
---> cd0de98dea36
Successfully built cd0de98dea36
Successfully tagged localhost/my-image:functional-000000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

TestFunctional/parallel/ImageCommands/Setup (2.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.495615043s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-000000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000: (4.769674827s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.17s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000
2023/01/28 10:29:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000: (2.48453482s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.009806988s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-000000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image load --daemon gcr.io/google-containers/addon-resizer:functional-000000: (3.497807734s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.93s)

TestFunctional/parallel/DockerEnv/bash (1.79s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-000000 docker-env) && out/minikube-darwin-amd64 status -p functional-000000"
E0128 10:29:45.144170    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-000000 docker-env) && out/minikube-darwin-amd64 status -p functional-000000": (1.129766053s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-000000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image save gcr.io/google-containers/addon-resizer:functional-000000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image save gcr.io/google-containers/addon-resizer:functional-000000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.290936359s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image rm gcr.io/google-containers/addon-resizer:functional-000000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.593124206s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.01s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-000000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-000000 image save --daemon gcr.io/google-containers/addon-resizer:functional-000000
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-000000 image save --daemon gcr.io/google-containers/addon-resizer:functional-000000: (3.069380987s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-000000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.21s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-000000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-000000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-000000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.22s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-084000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-084000: (2.215389073s)
--- PASS: TestImageBuild/serial/NormalBuild (2.22s)

TestImageBuild/serial/BuildWithBuildArg (0.96s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-084000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)

TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-084000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-084000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)
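
Taken together, the four builds above cover the image build flags seen in this run: -t for the tag, -f for a non-default Dockerfile, and repeated --build-opt values for build args and cache control. A combined invocation would look roughly like this (profile and tag names are illustrative):

    out/minikube-darwin-amd64 image build -p image-demo -t demo:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
      -f inner/Dockerfile ./testdata/image-build/test-f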

TestJSONOutput/start/Command (55.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-271000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0128 10:38:47.294398    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-271000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (55.176207233s)
--- PASS: TestJSONOutput/start/Command (55.18s)
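
With --output=json, minikube prints one JSON event per line in the CloudEvents-style shape shown under TestErrorJSONOutput below. Assuming jq is installed, the step names can be extracted like so (profile name is illustrative):

    out/minikube-darwin-amd64 start -p json-demo --output=json --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'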

TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-271000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.60s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-271000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-271000 --output=json --user=testUser
E0128 10:39:04.181915    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-271000 --output=json --user=testUser: (5.831351997s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-538000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-538000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.121516ms)
-- stdout --
	{"specversion":"1.0","id":"4d57c898-c1de-4c05-a7ed-1c2451cb3734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-538000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0081ed7-4fab-46db-86e6-90677197b310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"c2d50db4-be36-42b4-be67-1e4214e90e26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig"}}
	{"specversion":"1.0","id":"cf2c2724-c322-4a00-9057-7059de827b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"dddac878-4800-4d82-b9e7-031dbe82cfe3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e8ce3c5-af57-4a9f-bcf9-68c035fc4488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube"}}
	{"specversion":"1.0","id":"26785971-2339-4d0f-ad95-4c5dbc44c216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39938296-2738-4bb2-add6-034892e8aa95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-538000
--- PASS: TestErrorJSONOutput (0.76s)
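
The error event in the stdout above carries type io.k8s.sigs.minikube.error plus a name and message in its data field; a sketch for surfacing just that record with jq (profile name is illustrative, and jq is assumed to be installed):

    out/minikube-darwin-amd64 start -p err-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'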

TestKicCustomNetwork/create_custom_network (31.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-275000 --network=
E0128 10:39:15.015843    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-275000 --network=: (29.272466637s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-275000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-275000: (2.646205425s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.98s)

TestKicCustomNetwork/use_default_bridge_network (36.56s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-591000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-591000 --network=bridge: (34.084452161s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-591000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-591000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-591000: (2.419341478s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.56s)

TestKicExistingNetwork (38.16s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-996000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-996000 --network=existing-network: (35.362933751s)
helpers_test.go:175: Cleaning up "existing-network-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-996000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-996000: (2.433764149s)
--- PASS: TestKicExistingNetwork (38.16s)
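
Unlike the two runs above, --network=existing-network attaches the node container to a Docker network that already exists, which the test presumably creates beforehand. A sketch of the same flow (network and profile names are illustrative):

    docker network create demo-net
    out/minikube-darwin-amd64 start -p net-demo --network=demo-net
    docker network ls --format {{.Name}}    # demo-net should still be listed after the start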

TestKicCustomSubnet (36.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-474000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-474000 --subnet=192.168.60.0/24: (33.731850861s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-474000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-474000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-474000: (2.59691971s)
--- PASS: TestKicCustomSubnet (36.39s)
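
The subnet check is a start followed by a docker network inspect of the profile's network, essentially (profile name is illustrative):

    out/minikube-darwin-amd64 start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24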

TestKicStaticIP (32.62s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-787000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-787000 --static-ip=192.168.200.200: (29.697310227s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-787000 ip
helpers_test.go:175: Cleaning up "static-ip-787000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-787000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-787000: (2.674601371s)
--- PASS: TestKicStaticIP (32.62s)
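
The static IP variant pins the node address and verifies it with the ip subcommand; by hand (profile name is illustrative):

    out/minikube-darwin-amd64 start -p ip-demo --static-ip=192.168.200.200
    out/minikube-darwin-amd64 -p ip-demo ip    # should print 192.168.200.200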

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (69.20s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-133000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-133000 --driver=docker : (32.334643412s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-136000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-136000 --driver=docker : (29.779792805s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-133000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-136000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-136000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-136000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-136000: (2.625242164s)
helpers_test.go:175: Cleaning up "first-133000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-133000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-133000: (2.610831691s)
--- PASS: TestMinikubeProfile (69.20s)
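
profile list -ojson appears to return a document with valid and invalid profile arrays; assuming that schema and an installed jq, the active profile names can be pulled out with:

    out/minikube-darwin-amd64 profile list -ojson | jq -r '.valid[].Name'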

TestMountStart/serial/StartWithMountFirst (8.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-865000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-865000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.233951982s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.24s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-865000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (8.26s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-882000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-882000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.257983278s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.26s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-882000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (2.15s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-865000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-865000 --alsologtostderr -v=5: (2.145951112s)
--- PASS: TestMountStart/serial/DeleteFirst (2.15s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-882000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.57s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-882000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-882000: (1.566009252s)
--- PASS: TestMountStart/serial/Stop (1.57s)

TestMountStart/serial/RestartStopped (5.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-882000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-882000: (4.876066828s)
--- PASS: TestMountStart/serial/RestartStopped (5.88s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-882000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)
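
The MountStart sequence boils down to starting a Kubernetes-free profile with an explicit host mount and then checking the mount point over ssh, including after a stop and restart. Condensed (profile name is illustrative):

    out/minikube-darwin-amd64 start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker
    out/minikube-darwin-amd64 -p mount-demo ssh -- ls /minikube-host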

TestMultiNode/serial/FreshStart2Nodes (79.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-940000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0128 10:44:04.179474    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-940000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m18.156388785s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.06s)

TestMultiNode/serial/DeployApp2Nodes (9.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-940000 -- rollout status deployment/busybox: (7.664998413s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-nqnjm -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-xzm62 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-nqnjm -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-xzm62 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-nqnjm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-xzm62 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.59s)
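
The deployment check resolves kubernetes.io, kubernetes.default, and the fully qualified service name from each busybox pod. One leg of it by hand (profile name is illustrative; substitute a pod name returned by the first command):

    out/minikube-darwin-amd64 kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-darwin-amd64 kubectl -p multinode-demo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local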

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-nqnjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-nqnjm -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-xzm62 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-940000 -- exec busybox-6b86dd6d48-xzm62 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (27.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-940000 -v 3 --alsologtostderr
E0128 10:45:27.225227    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-940000 -v 3 --alsologtostderr: (26.285766439s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr: (1.043674153s)
--- PASS: TestMultiNode/serial/AddNode (27.33s)

TestMultiNode/serial/ProfileList (0.52s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.52s)

TestMultiNode/serial/CopyFile (15.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 status --output json --alsologtostderr: (1.077527045s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp testdata/cp-test.txt multinode-940000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3871458454/001/cp-test_multinode-940000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000:/home/docker/cp-test.txt multinode-940000-m02:/home/docker/cp-test_multinode-940000_multinode-940000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test_multinode-940000_multinode-940000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000:/home/docker/cp-test.txt multinode-940000-m03:/home/docker/cp-test_multinode-940000_multinode-940000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test_multinode-940000_multinode-940000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp testdata/cp-test.txt multinode-940000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3871458454/001/cp-test_multinode-940000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m02:/home/docker/cp-test.txt multinode-940000:/home/docker/cp-test_multinode-940000-m02_multinode-940000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test_multinode-940000-m02_multinode-940000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m02:/home/docker/cp-test.txt multinode-940000-m03:/home/docker/cp-test_multinode-940000-m02_multinode-940000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test_multinode-940000-m02_multinode-940000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp testdata/cp-test.txt multinode-940000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3871458454/001/cp-test_multinode-940000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m03:/home/docker/cp-test.txt multinode-940000:/home/docker/cp-test_multinode-940000-m03_multinode-940000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000 "sudo cat /home/docker/cp-test_multinode-940000-m03_multinode-940000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 cp multinode-940000-m03:/home/docker/cp-test.txt multinode-940000-m02:/home/docker/cp-test_multinode-940000-m03_multinode-940000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 ssh -n multinode-940000-m02 "sudo cat /home/docker/cp-test_multinode-940000-m03_multinode-940000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.26s)
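
minikube cp accepts node:path on either side, so the matrix above covers host-to-node, node-to-host, and node-to-node copies, each verified with ssh -n. The node-to-node leg, condensed (profile name is illustrative):

    out/minikube-darwin-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"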

TestMultiNode/serial/StopNode (3.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 node stop m03: (1.535778369s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-940000 status: exit status 7 (778.879394ms)
-- stdout --
	multinode-940000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr: exit status 7 (826.362829ms)
-- stdout --
	multinode-940000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0128 10:46:02.587104   10123 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:46:02.587339   10123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:46:02.587344   10123 out.go:309] Setting ErrFile to fd 2...
	I0128 10:46:02.587348   10123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:46:02.587457   10123 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:46:02.587644   10123 out.go:303] Setting JSON to false
	I0128 10:46:02.587666   10123 mustload.go:65] Loading cluster: multinode-940000
	I0128 10:46:02.587715   10123 notify.go:220] Checking for updates...
	I0128 10:46:02.587929   10123 config.go:180] Loaded profile config "multinode-940000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:46:02.587943   10123 status.go:255] checking status of multinode-940000 ...
	I0128 10:46:02.588343   10123 cli_runner.go:164] Run: docker container inspect multinode-940000 --format={{.State.Status}}
	I0128 10:46:02.647840   10123 status.go:330] multinode-940000 host status = "Running" (err=<nil>)
	I0128 10:46:02.647867   10123 host.go:66] Checking if "multinode-940000" exists ...
	I0128 10:46:02.648108   10123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-940000
	I0128 10:46:02.707601   10123 host.go:66] Checking if "multinode-940000" exists ...
	I0128 10:46:02.707882   10123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 10:46:02.707969   10123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-940000
	I0128 10:46:02.816410   10123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51366 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/multinode-940000/id_rsa Username:docker}
	I0128 10:46:02.907003   10123 ssh_runner.go:195] Run: systemctl --version
	I0128 10:46:02.911505   10123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 10:46:02.921123   10123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-940000
	I0128 10:46:02.981820   10123 kubeconfig.go:92] found "multinode-940000" server: "https://127.0.0.1:51365"
	I0128 10:46:02.981850   10123 api_server.go:165] Checking apiserver status ...
	I0128 10:46:02.981894   10123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 10:46:02.992963   10123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2026/cgroup
	W0128 10:46:03.001542   10123 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2026/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0128 10:46:03.001600   10123 ssh_runner.go:195] Run: ls
	I0128 10:46:03.005898   10123 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51365/healthz ...
	I0128 10:46:03.011501   10123 api_server.go:278] https://127.0.0.1:51365/healthz returned 200:
	ok
	I0128 10:46:03.011517   10123 status.go:421] multinode-940000 apiserver status = Running (err=<nil>)
	I0128 10:46:03.011533   10123 status.go:257] multinode-940000 status: &{Name:multinode-940000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 10:46:03.011544   10123 status.go:255] checking status of multinode-940000-m02 ...
	I0128 10:46:03.011796   10123 cli_runner.go:164] Run: docker container inspect multinode-940000-m02 --format={{.State.Status}}
	I0128 10:46:03.070572   10123 status.go:330] multinode-940000-m02 host status = "Running" (err=<nil>)
	I0128 10:46:03.070597   10123 host.go:66] Checking if "multinode-940000-m02" exists ...
	I0128 10:46:03.070891   10123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-940000-m02
	I0128 10:46:03.130697   10123 host.go:66] Checking if "multinode-940000-m02" exists ...
	I0128 10:46:03.130966   10123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 10:46:03.131014   10123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-940000-m02
	I0128 10:46:03.191735   10123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51443 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2556/.minikube/machines/multinode-940000-m02/id_rsa Username:docker}
	I0128 10:46:03.284385   10123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 10:46:03.294292   10123 status.go:257] multinode-940000-m02 status: &{Name:multinode-940000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0128 10:46:03.294313   10123 status.go:255] checking status of multinode-940000-m03 ...
	I0128 10:46:03.294595   10123 cli_runner.go:164] Run: docker container inspect multinode-940000-m03 --format={{.State.Status}}
	I0128 10:46:03.353475   10123 status.go:330] multinode-940000-m03 host status = "Stopped" (err=<nil>)
	I0128 10:46:03.353496   10123 status.go:343] host is not running, skipping remaining checks
	I0128 10:46:03.353506   10123 status.go:257] multinode-940000-m03 status: &{Name:multinode-940000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.14s)
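
Note the exit code: with one node stopped, status exits 7 rather than 0, which is what the test asserts on. In a script that could be checked along these lines (profile name is illustrative; the meaning of 7 is taken from this run, where it corresponded to a stopped host):

    out/minikube-darwin-amd64 -p multinode-demo node stop m03
    out/minikube-darwin-amd64 -p multinode-demo status
    echo "status exit code: $?"    # 7 here: at least one host is stopped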

TestMultiNode/serial/StartAfterStop (10.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 node start m03 --alsologtostderr: (9.384201043s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 status: (1.016454391s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.52s)

TestMultiNode/serial/RestartKeepsNodes (83.40s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-940000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-940000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-940000: (23.222180553s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-940000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-940000 --wait=true -v=8 --alsologtostderr: (1m0.044260514s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-940000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.40s)

TestMultiNode/serial/DeleteNode (6.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 node delete m03: (5.413301557s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.33s)

TestMultiNode/serial/StopMultiNode (22.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-940000 stop: (21.658294628s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-940000 status: exit status 7 (172.204687ms)
-- stdout --
	multinode-940000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr: exit status 7 (175.376241ms)
-- stdout --
	multinode-940000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0128 10:48:05.482437   10675 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:48:05.482618   10675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:05.482623   10675 out.go:309] Setting ErrFile to fd 2...
	I0128 10:48:05.482627   10675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:05.482745   10675 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2556/.minikube/bin
	I0128 10:48:05.482932   10675 out.go:303] Setting JSON to false
	I0128 10:48:05.482955   10675 mustload.go:65] Loading cluster: multinode-940000
	I0128 10:48:05.482988   10675 notify.go:220] Checking for updates...
	I0128 10:48:05.483243   10675 config.go:180] Loaded profile config "multinode-940000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:48:05.483255   10675 status.go:255] checking status of multinode-940000 ...
	I0128 10:48:05.483648   10675 cli_runner.go:164] Run: docker container inspect multinode-940000 --format={{.State.Status}}
	I0128 10:48:05.542043   10675 status.go:330] multinode-940000 host status = "Stopped" (err=<nil>)
	I0128 10:48:05.542060   10675 status.go:343] host is not running, skipping remaining checks
	I0128 10:48:05.542066   10675 status.go:257] multinode-940000 status: &{Name:multinode-940000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 10:48:05.542093   10675 status.go:255] checking status of multinode-940000-m02 ...
	I0128 10:48:05.542338   10675 cli_runner.go:164] Run: docker container inspect multinode-940000-m02 --format={{.State.Status}}
	I0128 10:48:05.600022   10675 status.go:330] multinode-940000-m02 host status = "Stopped" (err=<nil>)
	I0128 10:48:05.600048   10675 status.go:343] host is not running, skipping remaining checks
	I0128 10:48:05.600057   10675 status.go:257] multinode-940000-m02 status: &{Name:multinode-940000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.01s)

TestMultiNode/serial/RestartMultiNode (52.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-940000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0128 10:48:47.288105    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-940000 --wait=true -v=8 --alsologtostderr --driver=docker : (51.79805189s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-940000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.72s)

TestMultiNode/serial/ValidateNameConflict (38.00s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-940000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-940000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-940000-m02 --driver=docker : exit status 14 (620.30943ms)
-- stdout --
	* [multinode-940000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-940000-m02' is duplicated with machine name 'multinode-940000-m02' in profile 'multinode-940000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-940000-m03 --driver=docker 
E0128 10:49:04.176664    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-940000-m03 --driver=docker : (34.163084225s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-940000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-940000: exit status 80 (525.05448ms)
-- stdout --
	* Adding node m03 to cluster multinode-940000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-940000-m03 already exists in multinode-940000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-940000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-940000-m03: (2.61862883s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.00s)
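The two non-zero exits above are the intended guardrails: a new profile may not reuse a machine name that an existing multi-node profile already owns (exit 14, MK_USAGE), and growing a cluster goes through node add rather than a look-alike profile (exit 80, GUEST_NODE_ADD, when the node name is taken). A minimal sketch of the rule, with placeholder profile names:

$ minikube node list -p multinode-demo      # machines: multinode-demo, multinode-demo-m02
$ minikube start -p multinode-demo-m02      # rejected: collides with a machine in the existing profile
$ minikube node add -p multinode-demo       # the supported way to attach another node
$ minikube delete -p multinode-demo-m03     # clean up any stray stand-alone profile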

TestPreload (120.22s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-443000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0128 10:50:10.372189    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-443000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m0.312290774s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-443000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-443000 -- docker pull gcr.io/k8s-minikube/busybox: (2.187758486s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-443000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-443000: (11.008896444s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-443000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-443000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (43.547299049s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-443000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-443000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-443000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-443000: (2.723123114s)
--- PASS: TestPreload (120.22s)
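The preload test round-trips an image through a stop/start cycle: with --preload=false minikube provisions without the preloaded images tarball, an extra image is pulled into the node's Docker daemon, and it must still be present after a restart. Condensed, with a placeholder profile name:

$ minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
$ minikube ssh -p preload-demo -- docker pull gcr.io/k8s-minikube/busybox
$ minikube stop -p preload-demo
$ minikube start -p preload-demo --memory=2200 --driver=docker   # restart without recreating the node
$ minikube ssh -p preload-demo -- docker images                  # busybox should still be listed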

TestScheduledStopUnix (107.79s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-645000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-645000 --memory=2048 --driver=docker : (33.466697704s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-645000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-645000 -n scheduled-stop-645000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-645000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-645000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-645000 -n scheduled-stop-645000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-645000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-645000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-645000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-645000: exit status 7 (115.931773ms)

-- stdout --
	scheduled-stop-645000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-645000 -n scheduled-stop-645000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-645000 -n scheduled-stop-645000: exit status 7 (113.23134ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-645000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-645000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-645000: (2.324549554s)
--- PASS: TestScheduledStopUnix (107.79s)
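The scheduled-stop cycle above, condensed into the commands it exercises (profile name is a placeholder):

$ minikube stop -p sched-demo --schedule 5m                  # arm a stop five minutes out
$ minikube status -p sched-demo --format={{.TimeToStop}}     # remaining time while a stop is armed
$ minikube stop -p sched-demo --cancel-scheduled             # disarm the pending stop
$ minikube stop -p sched-demo --schedule 15s                 # re-arm; once it fires, status exits 7 with host Stopped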

TestSkaffold (60.42s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe70976973 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-449000 --memory=2600 --driver=docker 
E0128 10:53:47.295025    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-449000 --memory=2600 --driver=docker : (29.022123801s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe70976973 run --minikube-profile skaffold-449000 --kube-context skaffold-449000 --status-check=true --port-forward=false --interactive=false
E0128 10:54:04.182820    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe70976973 run --minikube-profile skaffold-449000 --kube-context skaffold-449000 --status-check=true --port-forward=false --interactive=false: (17.03561738s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6d69c56887-8vlc9" [77bda083-cb4f-4587-9f35-b6e17480a671] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01436359s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5dfccd744b-n6d9q" [baad0f5b-69b5-4a83-8973-db65e3736307] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008744128s
helpers_test.go:175: Cleaning up "skaffold-449000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-449000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-449000: (2.968048294s)
--- PASS: TestSkaffold (60.42s)
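The Skaffold flow needs only a running profile and a matching kube-context; a minimal sketch, assuming skaffold is on PATH and the profile name is a placeholder:

$ minikube start -p skaffold-demo --memory=2600 --driver=docker
$ skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false
$ kubectl --context skaffold-demo get pods -l app=leeroy-app   # deployed pods should reach Running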

TestInsufficientStorage (14.57s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-410000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-410000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.348681432s)

-- stdout --
	{"specversion":"1.0","id":"862a2b35-1efc-4e9a-a063-df064b379c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-410000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1c1faf5-2a59-4c58-a33a-941193ae369f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"18dfa369-4614-42ba-a1fb-52627a1038b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig"}}
	{"specversion":"1.0","id":"5f84286e-a071-463d-be4e-3c5d411eed2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b1b56623-efb9-4278-8984-608864176230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41919947-fc57-42cf-b242-fc29b0fce0e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube"}}
	{"specversion":"1.0","id":"005840fb-f5d0-4758-b21e-49642095cf81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36c42813-c4e2-4eea-b95c-91f05917fb54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e35cf0b6-515d-4a5d-9967-dd9bcbaf4299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ce4c13d-a19f-4543-803a-acf74b7029b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d620a9ed-6798-4753-b03e-f1dddc3786a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d88fd691-2e8e-4885-91d1-6aa2181666c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-410000 in cluster insufficient-storage-410000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"afb77fd7-c780-4890-b4f2-9e8a79bc7d3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3632a5e2-6285-4b2b-8e0c-ddef97970860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6f8be7f-384d-450d-8e1f-d6ec95595ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-410000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-410000 --output=json --layout=cluster: exit status 7 (408.579128ms)

-- stdout --
	{"Name":"insufficient-storage-410000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-410000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0128 10:54:41.340081   12423 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-410000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-410000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-410000 --output=json --layout=cluster: exit status 7 (404.23954ms)

-- stdout --
	{"Name":"insufficient-storage-410000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-410000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0128 10:54:41.744749   12433 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-410000" does not appear in /Users/jenkins/minikube-integration/15565-2556/kubeconfig
	E0128 10:54:41.753815   12433 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/insufficient-storage-410000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-410000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-410000: (2.404661492s)
--- PASS: TestInsufficientStorage (14.57s)
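With --output=json, minikube start emits one CloudEvents-style JSON object per line, so failures such as RSRC_DOCKER_STORAGE (exit code 26) can be picked out mechanically. A hedged sketch using jq, which is not part of the test itself:

$ minikube start -p demo --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'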

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.95s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2806791014/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2806791014/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2806791014/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2806791014/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.95s)
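The warning above is the expected path in CI: updating the hyperkit driver requires a setuid-root binary, which cannot be arranged with --interactive=false and no cached sudo credentials. Done by hand, the two commands minikube prints are (shown here against a default MINIKUBE_HOME; the test used a temp directory):

$ sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit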

TestStoppedBinaryUpgrade/Setup (0.55s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-118000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-118000: (3.535873507s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

TestPause/serial/Start (54.58s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-664000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0128 11:02:00.443623    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 11:02:07.225194    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-664000 --memory=2048 --install-addons=false --wait=all --driver=docker : (54.583603851s)
--- PASS: TestPause/serial/Start (54.58s)

TestPause/serial/SecondStartNoReconfiguration (54.62s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-664000 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-664000 --alsologtostderr -v=1 --driver=docker : (54.603914215s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (54.62s)

TestPause/serial/Pause (0.75s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-664000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.46s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-664000 --output=json --layout=cluster

=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-664000 --output=json --layout=cluster: exit status 2 (462.244834ms)

-- stdout --
	{"Name":"pause-664000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-664000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
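status --layout=cluster borrows HTTP status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage; the process exit status (2 here) is non-zero whenever the cluster is not fully running. A hedged jq sketch for scripting against it (jq is an assumption, not part of the test):

$ minikube status -p pause-demo --output=json --layout=cluster | jq -r '.StatusName'   # "Paused" while paused; the command exits 2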

TestPause/serial/Unpause (0.95s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-664000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.03s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-664000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-darwin-amd64 pause -p pause-664000 --alsologtostderr -v=5: (1.032438899s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (2.95s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-664000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-664000 --alsologtostderr -v=5: (2.94595824s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

TestPause/serial/VerifyDeletedResources (0.64s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-664000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-664000: exit status 1 (60.045625ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-664000

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
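Deletion is verified against Docker itself: after delete -p, the profile's container, volume, and network must all be gone. The same check by hand, with a placeholder profile name:

$ minikube delete -p pause-demo
$ docker ps -a --filter name=pause-demo   # no container rows expected
$ docker volume inspect pause-demo        # expect "Error: No such volume"
$ docker network ls                       # no pause-demo network expected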

TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (415.737015ms)

-- stdout --
	* [NoKubernetes-574000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2556/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2556/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)
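The exit status 14 above is the intended usage check: --no-kubernetes cannot be combined with --kubernetes-version, including a version pinned in the global config. The remedy the error message points at, with a placeholder profile name:

$ minikube config unset kubernetes-version       # clear any globally pinned version first
$ minikube start -p nok8s-demo --no-kubernetes --driver=docker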

TestNoKubernetes/serial/StartWithK8s (34.33s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-574000 --driver=docker 
E0128 11:03:47.290611    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-574000 --driver=docker : (33.884186824s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-574000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.33s)

TestNoKubernetes/serial/StartWithStopK8s (17.71s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --driver=docker 
E0128 11:04:04.176983    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --driver=docker : (14.839134604s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-574000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-574000 status -o json: exit status 2 (408.672059ms)

-- stdout --
	{"Name":"NoKubernetes-574000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-574000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-574000: (2.465708939s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.71s)

TestNoKubernetes/serial/Start (7.36s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --driver=docker 
E0128 11:04:16.594332    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-574000 --no-kubernetes --driver=docker : (7.355530382s)
--- PASS: TestNoKubernetes/serial/Start (7.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-574000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-574000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (402.634705ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
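The non-zero exit is the assertion here: systemctl is-active exits non-zero when the unit is inactive, and that status propagates back through minikube ssh. By hand, with a placeholder profile name:

$ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
$ echo $?                                        # non-zero while kubelet is not running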

TestNoKubernetes/serial/ProfileList (35.01s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (19.782712329s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
E0128 11:04:44.282320    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (15.231930485s)
--- PASS: TestNoKubernetes/serial/ProfileList (35.01s)

TestNoKubernetes/serial/Stop (1.61s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-574000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-574000: (1.61015419s)
--- PASS: TestNoKubernetes/serial/Stop (1.61s)

TestNoKubernetes/serial/StartNoArgs (4.98s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-574000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-574000 --driver=docker : (4.983538671s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-574000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-574000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (392.371172ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestNetworkPlugins/group/auto/Start (54.92s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (54.916099926s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.92s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (14.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7xfdp" [6a8a52ab-f056-4baa-8ae6-f0e983f5efd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7xfdp" [6a8a52ab-f056-4baa-8ae6-f0e983f5efd5] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.008494444s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.22s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
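Each network plugin passes the same three probes against the netcat deployment: cluster DNS, localhost reachability, and hairpin (a pod reaching itself through its own service). Condensed, with a placeholder context name:

$ kubectl --context auto-demo exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin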

TestNetworkPlugins/group/calico/Start (74.06s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0128 11:06:50.373730    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m14.061528459s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.06s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l8qt4" [35424d1c-7a6e-48c5-be29-8eae30a7b5ca] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020102085s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (19.25s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-zl88l" [6c3163d4-cae0-4205-8486-bafea7c6b1f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-zl88l" [6c3163d4-cae0-4205-8486-bafea7c6b1f6] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 19.010843162s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (19.25s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (68.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m8.398510893s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.40s)
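--cni accepts either a built-in plugin name or, as in this test, a path to a CNI manifest on disk. A sketch with placeholder profile names and paths:

$ minikube start -p cni-demo --cni=calico --driver=docker                  # built-in plugin
$ minikube start -p cni-demo2 --cni=./kube-flannel.yaml --driver=docker    # custom manifest from disk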

TestNetworkPlugins/group/false/Start (53.82s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0128 11:08:47.285948    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
E0128 11:09:04.174316    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 11:09:16.590592    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (53.816236773s)
--- PASS: TestNetworkPlugins/group/false/Start (53.82s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (19.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bnvfh" [09555fe8-9427-4332-8d32-8feaac15d525] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-bnvfh" [09555fe8-9427-4332-8d32-8feaac15d525] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 19.012001482s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (19.25s)

TestNetworkPlugins/group/false/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.50s)

TestNetworkPlugins/group/false/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xmx98" [380c6a76-dd8b-4ad9-bd96-d4aedf019c31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-xmx98" [380c6a76-dd8b-4ad9-bd96-d4aedf019c31] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.008087722s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (53.04s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (53.036788742s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.04s)

TestNetworkPlugins/group/flannel/Start (62.65s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0128 11:11:02.002129    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.007245    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.017291    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.037342    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.077483    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.157570    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.317687    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:02.637858    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:03.278471    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:04.559157    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:11:07.119436    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (1m2.649987737s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-98wqq" [30d9d3fe-e0fa-4322-b6c5-7bf24bec8f15] Running
E0128 11:11:12.239487    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015371928s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
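
Note: ControllerPod polls for a Ready pod behind the CNI's label selector (here app=kindnet in kube-system). A roughly equivalent manual check, assuming the kindnet-360000 profile is still up:

    kubectl --context kindnet-360000 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-360000 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m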

TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9fxqv" [f9f89dc6-2586-4e05-a627-a3d38ce972bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:11:22.479783    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-9fxqv" [f9f89dc6-2586-4e05-a627-a3d38ce972bd] Running
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.018831337s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.24s)
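
Note: NetCatPod installs a small dnsutils deployment from testdata/netcat-deployment.yaml (a path relative to minikube's integration-test directory) and waits for its pods to become Ready; a manual approximation is:

    kubectl --context kindnet-360000 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-360000 wait deploy/netcat --for=condition=Available --timeout=15m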

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m2t9k" [3f29d4e3-116a-43b2-94d8-0c1ad3b009b4] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.013250143s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)
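
Note: the DNS subtest asserts that in-cluster service discovery works. The short name kubernetes.default expands through the pod's DNS search path to the API server's ClusterIP Service, so the same lookup can also be issued fully qualified:

    kubectl --context kindnet-360000 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local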

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
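
Note: Localhost and HairPin reuse the same netcat pod to probe two different paths: the first connects to port 8080 on localhost inside the pod, while the second connects back to the pod through its own Service name, exercising hairpin NAT on the node:

    kubectl --context kindnet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # pod-local path
    kubectl --context kindnet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # back in via the Service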

TestNetworkPlugins/group/flannel/NetCatPod (20.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-360000 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bjdfj" [95c8b9b6-936b-461a-b99f-32aeca485cde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:11:42.960562    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-bjdfj" [95c8b9b6-936b-461a-b99f-32aeca485cde] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 20.009643965s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (20.22s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (54.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (54.252714039s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.25s)

TestNetworkPlugins/group/bridge/Start (48.80s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0128 11:12:23.920423    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (48.800632175s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.80s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8k5dc" [e1287d1d-2952-4a35-aef3-61dbb9e20270] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:12:54.435974    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.442041    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.452237    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.472377    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.512561    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.592684    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:54.752785    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:55.072954    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:55.713079    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:56.993251    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:12:59.555466    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-8k5dc" [e1287d1d-2952-4a35-aef3-61dbb9e20270] Running
E0128 11:13:04.675633    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.009512582s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

TestNetworkPlugins/group/bridge/NetCatPod (20.20s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rb84g" [c05c953a-6ee6-45d9-877f-962f08f72d57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:13:14.915834    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-rb84g" [c05c953a-6ee6-45d9-877f-962f08f72d57] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 20.008650653s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (20.20s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (53.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0128 11:13:35.396853    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:13:45.840272    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:13:47.282779    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/functional-000000/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (53.176975819s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (53.18s)
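
Note: kubenet is the one mode in this group selected through the legacy kubelet flag rather than a CNI plugin, which is why its start line differs from the others:

    minikube start -p bridge-360000  --memory=3072 --cni=bridge             --driver=docker   # CNI plugin, as in the groups above
    minikube start -p kubenet-360000 --memory=3072 --network-plugin=kubenet --driver=docker   # legacy kubenet, no CNI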

TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-360000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kubenet/NetCatPod (19.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-360000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5kvwp" [a3d08811-40c4-49ee-9018-ebbe315c8d9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:14:30.239842    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.246287    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.258419    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.280621    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.321577    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.401908    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.562048    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:30.884355    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:31.524486    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:32.804844    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:35.365001    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-5kvwp" [a3d08811-40c4-49ee-9018-ebbe315c8d9a] Running
E0128 11:14:40.415786    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.422000    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.432574    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.454257    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.487191    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:14:40.496123    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.576369    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:40.736500    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:41.056897    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:14:41.697184    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.008125364s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (19.22s)

TestNetworkPlugins/group/kubenet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-360000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.11s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-360000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (62.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-625000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0128 11:15:11.207324    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:15:21.379595    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:15:38.276128    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:15:39.637610    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0128 11:15:52.167243    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:16:01.999789    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:16:02.339668    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:16:08.702132    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:08.707305    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:08.719395    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:08.741179    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:08.782134    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-625000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m2.990702808s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.99s)
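
Note: --preload=false skips minikube's preloaded image tarball, so every Kubernetes image is pulled individually on first start; that is consistent with this FirstStart (62.99s) running slower than the embed-certs FirstStart (47.15s) later in this report:

    minikube start -p no-preload-625000 --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.26.1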

TestStartStop/group/no-preload/serial/DeployApp (13.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-625000 create -f testdata/busybox.yaml
E0128 11:16:08.863023    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b6cda28-d6b3-4800-a379-1227323148f8] Pending
E0128 11:16:09.023355    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:09.343621    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8b6cda28-d6b3-4800-a379-1227323148f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0128 11:16:09.983738    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:11.264034    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:13.824291    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8b6cda28-d6b3-4800-a379-1227323148f8] Running
E0128 11:16:18.944421    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.015036164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-625000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.28s)
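
Note: DeployApp schedules a single busybox pod and then execs a command in it, confirming both that workloads run and that exec plumbing works; the final assertion reads the container's open-file limit:

    kubectl --context no-preload-625000 create -f testdata/busybox.yaml
    kubectl --context no-preload-625000 exec busybox -- /bin/sh -c "ulimit -n"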

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-625000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-625000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)
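
Note: the --images/--registries overrides deliberately point metrics-server at an unreachable registry (fake.domain); the follow-up kubectl describe suggests the assertion is on the override landing in the rendered Deployment spec, not on a successful image pull:

    minikube addons enable metrics-server -p no-preload-625000 \
        --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain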

TestStartStop/group/no-preload/serial/Stop (11.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-625000 --alsologtostderr -v=3
E0128 11:16:24.282756    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.288332    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.298966    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.319104    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.361093    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.441356    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.602443    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:24.922542    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:25.563426    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:26.843798    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:29.184471    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:16:29.404303    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:29.680953    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-625000 --alsologtostderr -v=3: (11.143006378s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-625000 -n no-preload-625000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-625000 -n no-preload-625000: exit status 7 (114.19327ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-625000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0128 11:16:34.525035    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)
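
Note: minikube status encodes cluster state in its exit code, which is why the harness flags the non-zero exit as "may be ok": with the node halted, the Host field prints Stopped and the command exits 7 rather than 0:

    minikube status -p no-preload-625000 --format='{{.Host}}'   # prints "Stopped" and exits 7 while the node is halted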

TestStartStop/group/no-preload/serial/SecondStart (557.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-625000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0128 11:16:44.766218    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:16:49.666053    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:17:05.246753    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:17:14.088227    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/custom-flannel-360000/client.crt: no such file or directory
E0128 11:17:24.259186    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/false-360000/client.crt: no such file or directory
E0128 11:17:30.625953    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kindnet-360000/client.crt: no such file or directory
E0128 11:17:46.207065    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
E0128 11:17:49.659651    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.666033    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.678218    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.698326    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.740146    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.822318    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:49.983963    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:50.304935    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:50.945354    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:52.226124    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:54.433159    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
E0128 11:17:54.786710    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
E0128 11:17:59.907082    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-625000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m17.247512184s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-625000 -n no-preload-625000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (557.71s)

TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-867000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-867000 --alsologtostderr -v=3: (1.593297012s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-867000 -n old-k8s-version-867000: exit status 7 (119.424427ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-867000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-j49h2" [8d494304-c5ea-4d21-9831-e5131c1dde5b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014135772s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-j49h2" [8d494304-c5ea-4d21-9831-e5131c1dde5b] Running
E0128 11:26:02.047747    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008606914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-625000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.50s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-625000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/no-preload/serial/Pause (3.44s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-625000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-625000 -n no-preload-625000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-625000 -n no-preload-625000: exit status 2 (438.998044ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-625000 -n no-preload-625000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-625000 -n no-preload-625000: exit status 2 (435.938318ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-625000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-625000 -n no-preload-625000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-625000 -n no-preload-625000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.44s)
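
Note: the Pause sequence checks both status fields after pausing: the API server reports Paused while the kubelet reports Stopped, and each status call exits 2, which the harness again tolerates before unpausing. The manual equivalent:

    minikube pause   -p no-preload-625000
    minikube status  -p no-preload-625000 --format='{{.APIServer}}'   # Paused, exit status 2
    minikube status  -p no-preload-625000 --format='{{.Kubelet}}'     # Stopped, exit status 2
    minikube unpause -p no-preload-625000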

TestStartStop/group/embed-certs/serial/FirstStart (47.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-724000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0128 11:26:24.331318    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/flannel-360000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-724000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (47.152232473s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.15s)
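
Note: --embed-certs inlines the client certificate and key into kubeconfig as *-data fields instead of referencing files under ~/.minikube; the effect can be inspected after start:

    minikube start -p embed-certs-724000 --memory=2200 --embed-certs --driver=docker --kubernetes-version=v1.26.1
    kubectl config view --raw --minify --context=embed-certs-724000   # certs appear inline, not as file paths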

TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-724000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00d4364f-56d4-41a6-9196-966033c31124] Pending
helpers_test.go:344: "busybox" [00d4364f-56d4-41a6-9196-966033c31124] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00d4364f-56d4-41a6-9196-966033c31124] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013876971s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-724000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-724000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-724000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/embed-certs/serial/Stop (10.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-724000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-724000 --alsologtostderr -v=3: (10.975974976s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-724000 -n embed-certs-724000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-724000 -n embed-certs-724000: exit status 7 (118.609986ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-724000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/embed-certs/serial/SecondStart (556.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-724000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0128 11:27:25.091383    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/auto-360000/client.crt: no such file or directory
E0128 11:27:49.709004    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-724000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m16.261250778s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-724000 -n embed-certs-724000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (556.72s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rz22q" [67cfb2f4-ed42-4e1c-a6b6-19a1d89f681d] Running
E0128 11:36:36.712352    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/no-preload-625000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013971495s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
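
The wait above is label-driven. A manual equivalent of what the helper polls (same context and namespace as in the run) would be:

    kubectl --context embed-certs-724000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard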

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rz22q" [67cfb2f4-ed42-4e1c-a6b6-19a1d89f681d] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009796223s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-724000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-724000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)
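
The image audit parses the JSON that crictl emits over ssh. A rough by-hand equivalent, assuming jq is available on the host (it is not used by the test itself), is:

    out/minikube-darwin-amd64 ssh -p embed-certs-724000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'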

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.39s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-724000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-724000 -n embed-certs-724000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-724000 -n embed-certs-724000: exit status 2 (432.363746ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-724000 -n embed-certs-724000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-724000 -n embed-certs-724000: exit status 2 (434.685225ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-724000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-724000 -n embed-certs-724000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-724000 -n embed-certs-724000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)
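
The pause check reduces to: pause, confirm the apiserver reports Paused and the kubelet Stopped (both via exit status 2, which the test accepts), then unpause and re-check both. By hand:

    out/minikube-darwin-amd64 pause -p embed-certs-724000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-724000   # Paused, exit 2
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-724000     # Stopped, exit 2
    out/minikube-darwin-amd64 unpause -p embed-certs-724000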

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-218000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-218000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (53.118325951s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.12s)
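
The only flag that distinguishes this profile is --apiserver-port=8444 in place of minikube's default 8443. A quick sanity check from inside the node (not part of the test, and assuming ss is present in the node image) would be:

    out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-218000 "sudo ss -ltn | grep 8444"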

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-218000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6bdbec7b-2181-4172-ac8d-d36a0d5b65ca] Pending
helpers_test.go:344: "busybox" [6bdbec7b-2181-4172-ac8d-d36a0d5b65ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6bdbec7b-2181-4172-ac8d-d36a0d5b65ca] Running
E0128 11:37:49.706877    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/enable-default-cni-360000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.015684786s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-218000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
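
Outside the harness, the readiness wait on the busybox pod can be approximated with kubectl wait against the same label the test polls:

    kubectl --context default-k8s-diff-port-218000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m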

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-218000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0128 11:37:54.481819    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/calico-360000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-218000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-218000 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-218000 --alsologtostderr -v=3: (11.036964112s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000: exit status 7 (118.982847ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-218000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-218000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0128 11:38:06.821945    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/bridge-360000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-218000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m8.230402915s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pf5nm" [5fa9f942-1b64-4cfe-bf3f-ea27d7f3114f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pf5nm" [5fa9f942-1b64-4cfe-bf3f-ea27d7f3114f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.013576139s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pf5nm" [5fa9f942-1b64-4cfe-bf3f-ea27d7f3114f] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008509649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-218000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-218000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-218000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000: exit status 2 (492.576706ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000: exit status 2 (446.025722ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-218000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-218000 -n default-k8s-diff-port-218000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.13s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (43.129698198s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.13s)
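
Note that this start waits only on apiserver, system_pods and default_sa rather than on full pod readiness: with --network-plugin=cni and no CNI actually configured, ordinary pods cannot schedule yet, as the warnings in the later steps of this group spell out. The CNI-related flags from the command above, isolated:

    out/minikube-darwin-amd64 start -p newest-cni-047000 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16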

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-047000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0128 11:44:23.094096    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/kubenet-360000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.99s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-047000 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-047000 --alsologtostderr -v=3: (10.989957142s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000: exit status 7 (118.79416ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-047000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.62s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (25.167233395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-047000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.47s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-047000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000: exit status 2 (435.496036ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000: exit status 2 (489.697926ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-047000 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                    

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.8s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.801678ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-rj458" [9eb3f04c-bb83-444e-b0fc-37fa9e3ae336] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011101038s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xkcd7" [63294dcf-6c03-47b3-ae4e-53c68f7b57e1] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01077702s
addons_test.go:305: (dbg) Run:  kubectl --context addons-869000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-869000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-869000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.673399795s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.80s)
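
The in-cluster half of the check does run before the skip: a throwaway busybox pod probes the registry Service through its cluster DNS name, which is the successful wget shown above:

    kubectl --context addons-869000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The remainder of the test assumes direct connectivity to the cluster from the host, which, per the skip message, this driver/OS combination does not provide.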

                                                
                                    
TestAddons/parallel/Ingress (10.96s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-869000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-869000 replace --force -f testdata/nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:210: (dbg) Run:  kubectl --context addons-869000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [976243c7-d8c1-4dc5-ab26-c91635876335] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [976243c7-d8c1-4dc5-ab26-c91635876335] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009371154s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-869000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.96s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.13s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-000000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-000000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-b5zl9" [db3749bd-d590-4297-9592-2716c5795bc5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-b5zl9" [db3749bd-d590-4297-9592-2716c5795bc5] Running
E0128 10:29:05.460633    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory
E0128 10:29:06.740787    3849 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2556/.minikube/profiles/addons-869000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.010005284s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (12.13s)
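
The referenced issue concerns NodePort reachability on drivers that rely on port forwarding: on the docker driver for macOS the NodePort is not directly reachable from the host, so outside of tests the service would typically be reached through minikube's tunnel, e.g.:

    out/minikube-darwin-amd64 -p functional-000000 service hello-node-connect --url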

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.75s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-360000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-360000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-360000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-360000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: kubelet daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> k8s: kubelet logs:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-360000

>>> host: docker daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: docker daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: docker system info:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: cri-docker daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: cri-docker daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: cri-dockerd version:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: containerd daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: containerd daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: containerd config dump:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: crio daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: crio daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: /etc/crio:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

>>> host: crio config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

----------------------- debugLogs end: cilium-360000 [took: 6.24035263s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-360000
--- SKIP: TestNetworkPlugins/group/cilium (6.75s)

x
+
TestStartStop/group/disable-driver-mounts (0.43s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-170000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.43s)

                                                
                                    